Jan 30 08:29:53 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 08:29:53 crc restorecon[4576]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:29:53 crc restorecon[4576]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 08:29:53 crc restorecon[4576]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc 
restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc 
restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 
08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:53 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54
crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 
08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 08:29:54 crc restorecon[4576]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 
crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc 
restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:29:54 crc restorecon[4576]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 08:29:55 crc kubenswrapper[4758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 08:29:55 crc kubenswrapper[4758]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 08:29:55 crc kubenswrapper[4758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 08:29:55 crc kubenswrapper[4758]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 08:29:55 crc kubenswrapper[4758]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 08:29:55 crc kubenswrapper[4758]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.491112 4758 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497091 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497127 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497141 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497152 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497161 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497171 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497179 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497187 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497194 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497202 4758 feature_gate.go:330] 
unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497209 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497220 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497230 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497239 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497248 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497257 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497265 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497273 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497283 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497306 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497315 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497324 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497333 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497342 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497350 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497358 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497368 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497376 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497383 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497391 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497399 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497407 4758 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497415 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497423 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497430 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497438 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497446 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497455 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497463 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497471 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497478 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497486 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497495 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497503 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497511 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497518 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497526 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497534 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497541 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497549 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497556 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497564 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497572 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497580 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497587 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497595 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497602 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497616 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497625 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497636 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497646 4758 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497656 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497666 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497677 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497687 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497695 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497703 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497711 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497719 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497729 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.497738 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497870 4758 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497885 4758 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497899 4758 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497911 4758 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497922 4758 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497931 4758 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497942 4758 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497953 4758 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497963 4758 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497972 4758 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497981 4758 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.497991 4758 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498000 4758 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498009 4758 flags.go:64] FLAG: --cgroup-root=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498019 4758 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498027 4758 flags.go:64] FLAG: --client-ca-file=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498066 4758 flags.go:64] FLAG: --cloud-config=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498076 4758 flags.go:64] FLAG: --cloud-provider=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498085 4758 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498097 4758 flags.go:64] FLAG: --cluster-domain=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498106 4758 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498115 4758 flags.go:64] FLAG: --config-dir=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498124 4758 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498134 4758 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498146 4758 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498155 4758 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498164 4758 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498174 4758 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498183 4758 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498192 4758 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498201 4758 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498210 4758 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498220 4758 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498231 4758 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498240 4758 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498249 4758 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498258 4758 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498267 4758 flags.go:64] FLAG: --enable-server="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498276 4758 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498288 4758 flags.go:64] FLAG: --event-burst="100"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498297 4758 flags.go:64] FLAG: --event-qps="50"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498306 4758 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498315 4758 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498324 4758 flags.go:64] FLAG: --eviction-hard=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498335 4758 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498344 4758 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498352 4758 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498361 4758 flags.go:64] FLAG: --eviction-soft=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498370 4758 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498380 4758 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498389 4758 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498398 4758 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498407 4758 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498416 4758 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498424 4758 flags.go:64] FLAG: --feature-gates=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498435 4758 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498444 4758 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498453 4758 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498463 4758 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498471 4758 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498481 4758 flags.go:64] FLAG: --help="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498489 4758 flags.go:64] FLAG: --hostname-override=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498498 4758 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498507 4758 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498516 4758 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498525 4758 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498534 4758 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498542 4758 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498552 4758 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498561 4758 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498570 4758 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498579 4758 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498588 4758 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498598 4758 flags.go:64] FLAG: --kube-reserved=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498607 4758 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498617 4758 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498626 4758 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498635 4758 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498644 4758 flags.go:64] FLAG: --lock-file=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498653 4758 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498662 4758 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498674 4758 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498691 4758 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498703 4758 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498714 4758 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498724 4758 flags.go:64] FLAG: --logging-format="text"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498734 4758 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498744 4758 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498753 4758 flags.go:64] FLAG: --manifest-url=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498761 4758 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498772 4758 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498782 4758 flags.go:64] FLAG: --max-open-files="1000000"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498793 4758 flags.go:64] FLAG: --max-pods="110"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498802 4758 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498811 4758 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498820 4758 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498829 4758 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498838 4758 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498847 4758 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498856 4758 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498874 4758 flags.go:64] FLAG: --node-status-max-images="50"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498883 4758 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498894 4758 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498904 4758 flags.go:64] FLAG: --pod-cidr=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498915 4758 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498928 4758 flags.go:64] FLAG: --pod-manifest-path=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498937 4758 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498946 4758 flags.go:64] FLAG: --pods-per-core="0"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498955 4758 flags.go:64] FLAG: --port="10250"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498964 4758 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498973 4758 flags.go:64] FLAG: --provider-id=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498982 4758 flags.go:64] FLAG: --qos-reserved=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.498991 4758 flags.go:64] FLAG: --read-only-port="10255"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499000 4758 flags.go:64] FLAG: --register-node="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499009 4758 flags.go:64] FLAG: --register-schedulable="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499017 4758 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499032 4758 flags.go:64] FLAG: --registry-burst="10"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499071 4758 flags.go:64] FLAG: --registry-qps="5"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499081 4758 flags.go:64] FLAG: --reserved-cpus=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499089 4758 flags.go:64] FLAG: --reserved-memory=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499100 4758 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499109 4758 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499119 4758 flags.go:64] FLAG: --rotate-certificates="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499128 4758 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499137 4758 flags.go:64] FLAG: --runonce="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499146 4758 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499155 4758 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499164 4758 flags.go:64] FLAG: --seccomp-default="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499173 4758 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499182 4758 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499191 4758 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499200 4758 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499209 4758 flags.go:64] FLAG: --storage-driver-password="root"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499219 4758 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499228 4758 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499237 4758 flags.go:64] FLAG: --storage-driver-user="root"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499273 4758 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499283 4758 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499293 4758 flags.go:64] FLAG: --system-cgroups=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499302 4758 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499317 4758 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499326 4758 flags.go:64] FLAG: --tls-cert-file=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499335 4758 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499346 4758 flags.go:64] FLAG: --tls-min-version=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499355 4758 flags.go:64] FLAG: --tls-private-key-file=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499364 4758 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499373 4758 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499382 4758 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499391 4758 flags.go:64] FLAG: --v="2"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499402 4758 flags.go:64] FLAG: --version="false"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499414 4758 flags.go:64] FLAG: --vmodule=""
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499425 4758 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.499435 4758 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499700 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499715 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499729 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499741 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499751 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499760 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499769 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499777 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499785 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499793 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499802 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499811 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499821 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499829 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499837 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499846 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499854 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499862 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499869 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499877 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499886 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499894 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499902 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499925 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499935 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499944 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499952 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499960 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499968 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499977 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499985 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.499993 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500002 4758 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500011 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500019 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500027 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500064 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500096 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500106 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500113 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500121 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500129 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500143 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500152 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500162 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500171 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500182 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500191 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500201 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500209 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500217 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500225 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500234 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500243 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500250 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500258 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500266 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500273 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500282 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500307 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500315 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500323 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500331 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500338 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500346 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500354 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500362 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500369 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500380 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500390 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.500399 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.500411 4758 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.512463 4758 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.512504 4758 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512628 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512642 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512652 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512661 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512670 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512680 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512692 4758 feature_gate.go:351] Setting 
deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512708 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512724 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512735 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512747 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512757 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512766 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512775 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512784 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512793 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512802 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512810 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512819 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512827 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512839 
4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512849 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512859 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512867 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512876 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512884 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512893 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512901 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512909 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512918 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512927 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512937 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512945 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512953 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512963 4758 feature_gate.go:330] unrecognized 
feature gate: InsightsRuntimeExtractor Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512972 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512980 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512989 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.512997 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513005 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513013 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513022 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513030 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513065 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513073 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513082 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513090 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513098 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513107 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:29:55 crc 
kubenswrapper[4758]: W0130 08:29:55.513115 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513124 4758 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513132 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513140 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513152 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513162 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513171 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513180 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513189 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513197 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513205 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513213 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513222 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513231 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 
08:29:55.513240 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513248 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513256 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513264 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513273 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513282 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513289 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513299 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.513313 4758 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513539 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513555 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513568 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513578 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513587 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513597 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513606 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513614 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513622 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513631 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513640 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513649 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513657 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513666 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513673 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513682 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 
08:29:55.513690 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513698 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513707 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513715 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513723 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513731 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513740 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513748 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513758 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513766 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513775 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513783 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513792 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513800 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513808 4758 feature_gate.go:330] unrecognized feature gate: 
GatewayAPI Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513816 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513824 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513833 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513842 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513850 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513859 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513867 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513875 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513884 4758 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513892 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513900 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513908 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513916 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513925 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 08:29:55 crc kubenswrapper[4758]: 
W0130 08:29:55.513933 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513941 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513950 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513958 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513966 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513974 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513985 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.513995 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514005 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514015 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514024 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514038 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514069 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514078 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514086 4758 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514094 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514103 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514111 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514119 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514127 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514135 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514146 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514156 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514165 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514174 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.514185 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.514198 4758 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.515631 4758 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.521190 4758 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.521318 4758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.523015 4758 server.go:997] "Starting client certificate rotation" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.523085 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.523295 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 05:05:14.184962611 +0000 UTC Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.523389 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.560665 4758 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.564743 4758 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.565899 4758 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.584958 4758 log.go:25] "Validated CRI v1 runtime API" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.628365 4758 log.go:25] "Validated CRI v1 image API" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.630474 4758 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.635277 4758 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-08-24-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.635321 4758 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.654161 4758 manager.go:217] Machine: {Timestamp:2026-01-30 08:29:55.651815692 +0000 UTC m=+0.624127323 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199476736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4febaf4d-16fb-4d22-878e-0234bcbe9a79 BootID:17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 
Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:be:84:ae Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:be:84:ae Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:26:0f:ea Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:53:6f:80 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ef:46:63 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:70:9f:88 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:6d:69:d8:2c:d2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:77:01:f5:cd:4b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199476736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction 
Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.654568 4758 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.654734 4758 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.657750 4758 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.658128 4758 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.658177 4758 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.658498 4758 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.658519 4758 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.659980 4758 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.660031 4758 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.660324 4758 state_mem.go:36] "Initialized new in-memory state store" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.660456 4758 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.663870 4758 kubelet.go:418] "Attempting to sync node with API server" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.663906 4758 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.663945 4758 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.663966 4758 kubelet.go:324] "Adding apiserver pod source" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.663983 4758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 
08:29:55.669576 4758 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.670870 4758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.671838 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.671829 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.671979 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.671930 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.672518 4758 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 08:29:55 
crc kubenswrapper[4758]: I0130 08:29:55.674566 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674609 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674623 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674637 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674659 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674695 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674713 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674742 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674759 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674772 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674793 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.674807 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.676493 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.677342 4758 server.go:1280] "Started kubelet" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 
08:29:55.677523 4758 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.678622 4758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.679474 4758 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.679693 4758 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:55 crc systemd[1]: Started Kubernetes Kubelet. Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.679743 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.681033 4758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.681203 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:41:36.115364741 +0000 UTC Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.682079 4758 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.682771 4758 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.682801 4758 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.682971 4758 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.683057 4758 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="200ms" Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.684366 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.684529 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.687498 4758 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.687529 4758 factory.go:55] Registering systemd factory Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.687542 4758 factory.go:221] Registration of the systemd container factory successfully Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.688788 4758 factory.go:153] Registering CRI-O factory Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.688831 4758 factory.go:221] Registration of the crio container factory successfully Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.688896 4758 factory.go:103] Registering Raw factory Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.688924 4758 manager.go:1196] Started 
watching for new ooms in manager Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.690568 4758 manager.go:319] Starting recovery of all containers Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.691759 4758 server.go:460] "Adding debug handlers to kubelet server" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.696633 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f74fa006718f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:29:55.67729688 +0000 UTC m=+0.649608481,LastTimestamp:2026-01-30 08:29:55.67729688 +0000 UTC m=+0.649608481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.714859 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.714983 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715012 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715065 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715089 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715109 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715131 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715153 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715186 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715210 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715231 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715255 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715277 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715305 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715325 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715346 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715367 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715389 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715410 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715430 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715450 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715494 4758 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715518 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715539 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715564 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715585 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715616 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715637 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715660 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715681 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715717 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715766 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715800 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715827 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715853 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715879 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715908 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715937 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715965 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.715989 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" 
seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716020 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716093 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716122 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716147 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716173 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716202 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716228 4758 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716257 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716287 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716317 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716344 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716372 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716413 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716444 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716472 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716502 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716531 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716560 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716588 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716616 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716643 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716670 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716722 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716750 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716771 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" 
seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716796 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716816 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716836 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716897 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716918 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716940 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716967 4758 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.716999 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717108 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717162 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717213 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717238 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717267 4758 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717292 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717319 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717346 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717373 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717395 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717417 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717443 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717471 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717500 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717527 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717559 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717587 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717609 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717631 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717651 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717672 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717691 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717712 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717732 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717753 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717773 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717793 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717813 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717878 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717898 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717923 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717956 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.717984 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718013 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718080 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718108 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718137 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718165 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718195 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718224 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718251 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718279 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718298 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718317 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718338 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718358 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718377 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718400 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718431 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718516 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718539 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718560 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718581 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718600 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718618 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718638 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718658 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718678 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718697 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718745 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718766 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718787 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718806 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718828 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718847 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718866 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718884 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718904 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718924 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718944 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718963 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.718985 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719006 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719027 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719071 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719096 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719116 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719134 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719153 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719176 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719198 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719218 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719237 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719256 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719276 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719294 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719315 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719336 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719357 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719377 
4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719424 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719447 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719465 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719483 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719503 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719525 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719553 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719580 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719600 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719619 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719638 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719658 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719676 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719695 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719715 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719737 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719760 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719780 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 
08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.719800 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.723991 4758 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724064 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724092 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724113 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724133 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724153 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724171 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724192 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724211 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724230 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724251 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724273 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724294 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724317 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724347 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724392 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724416 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724437 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724459 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724479 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724498 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724565 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724588 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" 
seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724608 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724627 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724646 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724666 4758 reconstruct.go:97] "Volume reconstruction finished" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.724680 4758 reconciler.go:26] "Reconciler: start to sync state" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.729587 4758 manager.go:324] Recovery completed Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.741496 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.743111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.743174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.743202 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.749990 4758 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.750032 4758 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.750129 4758 state_mem.go:36] "Initialized new in-memory state store" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.762892 4758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.765297 4758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.767258 4758 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.767326 4758 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.767484 4758 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 08:29:55 crc kubenswrapper[4758]: W0130 08:29:55.768710 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.768757 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.769162 4758 policy_none.go:49] "None 
policy: Start" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.770574 4758 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.770704 4758 state_mem.go:35] "Initializing new in-memory state store" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.783117 4758 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841233 4758 manager.go:334] "Starting Device Plugin manager" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841275 4758 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841287 4758 server.go:79] "Starting device plugin registration server" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841713 4758 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841724 4758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841897 4758 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841959 4758 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.841966 4758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.849393 4758 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.868291 4758 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.868376 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.869727 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.869764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.869772 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.869919 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.870117 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.870179 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871184 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871312 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.871377 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.872630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.872659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.872671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.875245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.875288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.875302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.875484 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.875757 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.875856 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877330 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877705 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.877741 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878747 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.878972 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.879012 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.880063 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.880090 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.880098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.883801 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="400ms" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926396 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926444 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926483 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926519 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926611 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926639 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926687 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926766 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926881 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.926955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.927007 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.927121 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.941834 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.943540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.943644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.943674 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:55 crc kubenswrapper[4758]: I0130 08:29:55.943721 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:29:55 crc kubenswrapper[4758]: E0130 08:29:55.944371 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.176:6443: connect: connection refused" node="crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028431 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028470 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028487 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028536 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028569 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028615 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028629 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028653 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028670 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028723 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028766 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028801 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028851 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028801 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028646 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028703 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.028901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc 
kubenswrapper[4758]: I0130 08:29:56.028966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029025 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029132 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029198 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029273 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.029354 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.144628 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.146195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.146254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.146276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.146318 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:29:56 crc kubenswrapper[4758]: E0130 08:29:56.146899 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.176:6443: connect: connection refused" node="crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.207318 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.217104 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.235298 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.259746 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.261567 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-76707601e6148eee25570d2111c151a7287a77c842ba9d9b80688c5cc3b2fe82 WatchSource:0}: Error finding container 76707601e6148eee25570d2111c151a7287a77c842ba9d9b80688c5cc3b2fe82: Status 404 returned error can't find the container with id 76707601e6148eee25570d2111c151a7287a77c842ba9d9b80688c5cc3b2fe82 Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.262867 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b8b890c49302c135c19fd211a98b7e8bd09222e76a899889c040ce22212da39a WatchSource:0}: Error finding container b8b890c49302c135c19fd211a98b7e8bd09222e76a899889c040ce22212da39a: Status 404 returned error can't find the container with id b8b890c49302c135c19fd211a98b7e8bd09222e76a899889c040ce22212da39a Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.265338 4758 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.268206 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e2ae00b75e7e004758e53f0ca415263dac82585858464dc3fc1c6c3e05eb51ab WatchSource:0}: Error finding container e2ae00b75e7e004758e53f0ca415263dac82585858464dc3fc1c6c3e05eb51ab: Status 404 returned error can't find the container with id e2ae00b75e7e004758e53f0ca415263dac82585858464dc3fc1c6c3e05eb51ab Jan 30 08:29:56 crc kubenswrapper[4758]: E0130 08:29:56.285005 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="800ms" Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.293579 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-03cef4eff3ec3480704c9b03cac2fa080e98f7cb8a26420f9a9dcbf29e4b9a08 WatchSource:0}: Error finding container 03cef4eff3ec3480704c9b03cac2fa080e98f7cb8a26420f9a9dcbf29e4b9a08: Status 404 returned error can't find the container with id 03cef4eff3ec3480704c9b03cac2fa080e98f7cb8a26420f9a9dcbf29e4b9a08 Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.299333 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ba7753780cd9d3f1d0d35342c4024ad88f573b9601c0930447da2cc6ceeec2b8 WatchSource:0}: Error finding container ba7753780cd9d3f1d0d35342c4024ad88f573b9601c0930447da2cc6ceeec2b8: Status 404 returned error can't find the container with id 
ba7753780cd9d3f1d0d35342c4024ad88f573b9601c0930447da2cc6ceeec2b8 Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.547288 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.548421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.548444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.548452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.548471 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:29:56 crc kubenswrapper[4758]: E0130 08:29:56.548799 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.589059 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:56 crc kubenswrapper[4758]: E0130 08:29:56.589139 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.680846 4758 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.681889 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:25:46.036320085 +0000 UTC Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.770639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e2ae00b75e7e004758e53f0ca415263dac82585858464dc3fc1c6c3e05eb51ab"} Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.771617 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"76707601e6148eee25570d2111c151a7287a77c842ba9d9b80688c5cc3b2fe82"} Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.774559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b8b890c49302c135c19fd211a98b7e8bd09222e76a899889c040ce22212da39a"} Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.775424 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba7753780cd9d3f1d0d35342c4024ad88f573b9601c0930447da2cc6ceeec2b8"} Jan 30 08:29:56 crc kubenswrapper[4758]: I0130 08:29:56.777140 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"03cef4eff3ec3480704c9b03cac2fa080e98f7cb8a26420f9a9dcbf29e4b9a08"} Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.880752 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:56 crc kubenswrapper[4758]: E0130 08:29:56.880813 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:56 crc kubenswrapper[4758]: W0130 08:29:56.955821 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:56 crc kubenswrapper[4758]: E0130 08:29:56.956194 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:57 crc kubenswrapper[4758]: E0130 08:29:57.086492 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="1.6s" Jan 30 08:29:57 crc kubenswrapper[4758]: 
W0130 08:29:57.169793 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:57 crc kubenswrapper[4758]: E0130 08:29:57.169882 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.349549 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.351107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.351146 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.351157 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.351182 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:29:57 crc kubenswrapper[4758]: E0130 08:29:57.351677 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.571463 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 08:29:57 crc kubenswrapper[4758]: E0130 08:29:57.573337 
4758 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.680642 4758 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.682599 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:04:52.760698729 +0000 UTC Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.781440 4758 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2" exitCode=0 Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.781583 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.781581 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.782588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.782626 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.782639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.788636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.788680 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.788695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.790865 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68" exitCode=0 Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.790910 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.790973 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.792092 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.792130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.792143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.792644 4758 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8" exitCode=0 Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.792666 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.792783 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.793874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.793917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.793933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.794494 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.794754 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6" exitCode=0 Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.794788 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6"} Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.794822 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.796384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.796411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.796424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.796534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.796556 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:57 crc kubenswrapper[4758]: I0130 08:29:57.796567 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: W0130 08:29:58.548137 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:58 
crc kubenswrapper[4758]: E0130 08:29:58.548225 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.681305 4758 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.687781 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:43:13.005536558 +0000 UTC Jan 30 08:29:58 crc kubenswrapper[4758]: E0130 08:29:58.688184 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="3.2s" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.799290 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.799359 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.800274 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.800309 
4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.800318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803185 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803199 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803211 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803223 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803238 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.803993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.804022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.804048 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.804722 4758 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095" exitCode=0 Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.804792 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.804857 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.809286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.809479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.809552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.812479 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.812514 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.814501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.814642 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.814707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.818669 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.818705 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.818715 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9"} Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.818790 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:58 crc 
kubenswrapper[4758]: I0130 08:29:58.819381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.819402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.819410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.952473 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.953575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.953607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.953615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:58 crc kubenswrapper[4758]: I0130 08:29:58.953636 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:29:58 crc kubenswrapper[4758]: E0130 08:29:58.953983 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.133172 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.133509 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get 
\"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.133562 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.688444 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:58:50.33838017 +0000 UTC Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824209 4758 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989" exitCode=0 Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989"} Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824344 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824412 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824348 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824450 4758 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824453 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.824465 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829209 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829221 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:59 crc 
kubenswrapper[4758]: I0130 08:29:59.829279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829212 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829363 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.829378 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:29:59 crc kubenswrapper[4758]: I0130 08:29:59.838139 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.056355 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.638393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.688942 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:49:06.922512453 +0000 UTC Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.772521 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.830958 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b"} Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.831074 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.831005 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.831082 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757"} Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.831178 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7"} Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.831204 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8"} Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.832075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.832103 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.832112 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.832417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.832479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:00 crc kubenswrapper[4758]: I0130 08:30:00.832498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.669453 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.689267 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:05:59.33854985 +0000 UTC Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.841439 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202"} Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.841602 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.841662 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.841662 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.843647 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 
08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.843694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.843714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.843801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.843852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.843876 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.844194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.844240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:01 crc kubenswrapper[4758]: I0130 08:30:01.844264 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.078931 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.079203 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.080959 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.081011 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.081029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.155008 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.156893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.156968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.156985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.157025 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.689676 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:08:14.680213867 +0000 UTC Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.846351 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.850016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.850102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:02 crc kubenswrapper[4758]: I0130 08:30:02.850122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 
08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.056797 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.056933 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.690563 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:28:10.711347485 +0000 UTC Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.865842 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.866181 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.867919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.867974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:03 crc kubenswrapper[4758]: I0130 08:30:03.867987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:04 crc kubenswrapper[4758]: I0130 
08:30:04.691103 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:46:59.377251042 +0000 UTC Jan 30 08:30:05 crc kubenswrapper[4758]: I0130 08:30:05.691520 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:47:27.455851873 +0000 UTC Jan 30 08:30:05 crc kubenswrapper[4758]: E0130 08:30:05.849531 4758 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.037001 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.037473 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.039667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.039734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.039754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.045000 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.691809 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:40:56.660719464 +0000 UTC Jan 30 08:30:06 crc 
kubenswrapper[4758]: I0130 08:30:06.855803 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.857415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.857491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:06 crc kubenswrapper[4758]: I0130 08:30:06.857510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:07 crc kubenswrapper[4758]: I0130 08:30:07.545569 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 08:30:07 crc kubenswrapper[4758]: I0130 08:30:07.545733 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:07 crc kubenswrapper[4758]: I0130 08:30:07.547149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:07 crc kubenswrapper[4758]: I0130 08:30:07.547179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:07 crc kubenswrapper[4758]: I0130 08:30:07.547186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:07 crc kubenswrapper[4758]: I0130 08:30:07.692670 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:55:02.272566359 +0000 UTC Jan 30 08:30:08 crc kubenswrapper[4758]: I0130 08:30:08.693365 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:36:23.837400783 +0000 
UTC Jan 30 08:30:09 crc kubenswrapper[4758]: W0130 08:30:09.483651 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 08:30:09 crc kubenswrapper[4758]: I0130 08:30:09.483820 4758 trace.go:236] Trace[1803682412]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:29:59.481) (total time: 10002ms): Jan 30 08:30:09 crc kubenswrapper[4758]: Trace[1803682412]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (08:30:09.483) Jan 30 08:30:09 crc kubenswrapper[4758]: Trace[1803682412]: [10.002166965s] [10.002166965s] END Jan 30 08:30:09 crc kubenswrapper[4758]: E0130 08:30:09.483862 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 08:30:09 crc kubenswrapper[4758]: I0130 08:30:09.628515 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 08:30:09 crc kubenswrapper[4758]: I0130 08:30:09.628620 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Jan 30 08:30:09 crc kubenswrapper[4758]: I0130 08:30:09.635028 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 08:30:09 crc kubenswrapper[4758]: I0130 08:30:09.635140 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 08:30:09 crc kubenswrapper[4758]: I0130 08:30:09.693566 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:28:13.547220333 +0000 UTC Jan 30 08:30:10 crc kubenswrapper[4758]: I0130 08:30:10.644514 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:10 crc kubenswrapper[4758]: I0130 08:30:10.644706 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:10 crc kubenswrapper[4758]: I0130 08:30:10.646002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:10 crc kubenswrapper[4758]: I0130 08:30:10.646066 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:10 crc kubenswrapper[4758]: I0130 08:30:10.646084 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:10 crc kubenswrapper[4758]: I0130 08:30:10.694548 4758 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:40:20.100500323 +0000 UTC Jan 30 08:30:11 crc kubenswrapper[4758]: I0130 08:30:11.695485 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:41:34.552238583 +0000 UTC Jan 30 08:30:12 crc kubenswrapper[4758]: I0130 08:30:12.696440 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:43:16.694975075 +0000 UTC Jan 30 08:30:13 crc kubenswrapper[4758]: I0130 08:30:13.056741 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:30:13 crc kubenswrapper[4758]: I0130 08:30:13.056838 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:30:13 crc kubenswrapper[4758]: I0130 08:30:13.697356 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:37:51.824159596 +0000 UTC Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.140949 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:14 crc 
kubenswrapper[4758]: I0130 08:30:14.141281 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.142934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.143002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.143021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.149242 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.328791 4758 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.605216 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.608158 4758 trace.go:236] Trace[1190950679]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:30:00.326) (total time: 14281ms): Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[1190950679]: ---"Objects listed" error: 14281ms (08:30:14.608) Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[1190950679]: [14.281938403s] [14.281938403s] END Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.608194 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.610164 4758 
reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.610243 4758 trace.go:236] Trace[79865880]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:30:02.971) (total time: 11638ms): Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[79865880]: ---"Objects listed" error: 11638ms (08:30:14.610) Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[79865880]: [11.638615775s] [11.638615775s] END Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.610276 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.611890 4758 trace.go:236] Trace[2019750266]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:29:59.615) (total time: 14996ms): Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[2019750266]: ---"Objects listed" error: 14996ms (08:30:14.611) Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[2019750266]: [14.996735875s] [14.996735875s] END Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.611937 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.613463 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626415 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53038->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626506 4758 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53038->192.168.126.11:17697: read: connection reset by peer" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626944 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626989 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.651393 4758 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.678733 4758 apiserver.go:52] "Watching apiserver" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.682800 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.683257 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.683919 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684210 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684204 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684250 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684514 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.684971 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.685117 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.685250 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.685884 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.687777 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.688000 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.688476 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.689969 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.690381 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.690736 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.690796 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.691692 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.692235 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.697518 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-03 07:55:26.624337557 +0000 UTC Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.710733 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.710790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.710828 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.711762 4758 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.718908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.721992 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.736457 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.743533 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.751648 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.764104 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.778271 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.784451 4758 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.797081 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811226 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811316 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811767 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811884 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811964 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812079 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812167 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812277 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812355 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812439 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812543 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812637 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812279 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812764 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812291 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812447 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812722 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812832 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812852 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812956 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812971 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813005 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813023 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812487 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812653 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812690 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813051 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813070 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813127 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813142 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813162 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813161 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813179 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813207 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813249 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813280 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813310 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813303 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813349 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813400 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813424 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813447 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813477 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813522 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813571 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813582 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813595 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813621 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813647 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813720 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813745 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813791 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813814 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813837 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813886 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813911 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813933 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813994 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814023 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814091 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814114 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814183 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814205 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814229 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814249 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814271 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814293 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814313 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814334 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814358 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814399 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814423 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814443 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814465 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814531 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814577 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814600 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814623 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814666 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814688 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814710 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814733 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814778 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814798 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814820 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814884 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814907 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814930 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814979 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815001 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815026 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30
08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815090 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815113 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815136 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815158 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815180 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815202 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815225 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815246 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815269 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815291 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815313 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:30:14 crc 
kubenswrapper[4758]: I0130 08:30:14.815335 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815360 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815405 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815429 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815453 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815477 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815532 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815554 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815577 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:30:14 crc 
kubenswrapper[4758]: I0130 08:30:14.815598 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815622 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815648 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815672 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815743 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815766 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: 
I0130 08:30:14.815883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815906 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815930 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815952 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815974 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815997 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816020 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816083 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816106 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816153 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816262 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816287 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816313 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816336 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816360 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816384 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813668 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813706 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814002 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814152 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814246 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814293 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814546 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814617 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814688 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814742 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815120 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815875 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816200 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816276 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816627 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816402 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816831 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816996 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818437 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818769 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818785 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818892 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.819496 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.819651 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.819992 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.820227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.820613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821012 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821482 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821212 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821659 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821906 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821938 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821970 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822072 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822443 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822471 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822851 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.823322 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825217 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825563 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825664 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825994 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.826000 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.826297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827076 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827419 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827523 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827532 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828113 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827614 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827687 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827990 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828353 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828549 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828627 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828694 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828994 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829278 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829386 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829701 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829828 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829992 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830164 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830421 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830630 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830555 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830824 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831107 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831288 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831369 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831675 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831894 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832023 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832476 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832666 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832867 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833075 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.833159 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.333140272 +0000 UTC m=+20.305451823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833384 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833398 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833767 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816409 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833937 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833997 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834024 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834080 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834117 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834151 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834371 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834402 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834424 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834464 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834516 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834536 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834573 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834616 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834743 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834782 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834800 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834847 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835015 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835049 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835069 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835113 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835213 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835254 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835274 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835274 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835297 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835317 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836551 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836927 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837204 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837226 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837466 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837588 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837762 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.838438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.838498 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839807 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839867 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839976 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.840296 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.840475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.840499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842312 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842352 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842383 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843430 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842567 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842720 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842197 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843084 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843588 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.844616 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.844696 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.844718 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845346 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845555 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845552 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845919 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846060 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846240 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846414 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846616 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846721 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846776 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846819 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846941 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847135 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847779 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847852 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847875 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847928 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847996 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848020 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848120 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848153 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848181 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848203 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848256 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848280 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848801 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848835 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848859 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848895 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849394 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849422 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849436 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849448 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849463 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849475 4758 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849487 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849507 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849518 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849529 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849540 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849552 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849563 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849573 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849583 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849624 4758 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849635 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") 
on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849646 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849660 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849671 4758 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849683 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849693 4758 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849707 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849717 4758 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849728 4758 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849738 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849750 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849760 4758 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849769 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849778 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849790 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849801 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849811 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849824 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849834 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849844 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849854 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849867 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849876 4758 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849886 4758 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849896 4758 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849908 4758 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849917 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849926 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849937 4758 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849947 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849955 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849979 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849991 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850002 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850011 4758 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850109 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850126 4758 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc 
kubenswrapper[4758]: I0130 08:30:14.850136 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850146 4758 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850187 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850200 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850223 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850233 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850246 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850255 4758 reconciler_common.go:293] "Volume detached 
for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850264 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850274 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850287 4758 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850298 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850308 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850332 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850344 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850370 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850378 4758 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850387 4758 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850399 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850408 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850417 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850428 4758 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850437 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850446 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850455 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850467 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850480 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850488 4758 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850497 4758 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850510 
4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850518 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850528 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850552 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850562 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850596 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850605 4758 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850616 4758 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850626 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850635 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850646 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850659 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850669 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850689 4758 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850711 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850723 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850734 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850744 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850757 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850769 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850778 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850786 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850814 4758 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850823 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850833 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850850 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850863 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850872 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850880 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" 
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850893 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850903 4758 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850911 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850919 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850930 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850939 4758 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850947 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850967 4758 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850979 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850998 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851007 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851015 4758 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851378 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851408 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851421 4758 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849666 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851632 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851779 4758 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851804 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851820 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849835 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: 
"3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850056 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851158 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851843 4758 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851978 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852001 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852024 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852074 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852089 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852102 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852119 4758 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852133 4758 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852150 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852167 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath 
\"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852180 4758 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852194 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852206 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852219 4758 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852230 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852243 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852289 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852313 4758 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852325 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852339 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852356 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.852364 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852376 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852389 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852400 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852436 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.852468 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.352424239 +0000 UTC m=+20.324735790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852501 4758 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852518 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852540 4758 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 
08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852556 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852570 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852587 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852601 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852614 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852630 4758 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852647 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852660 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852682 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852696 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852729 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852826 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.853600 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.854616 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.854682 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.354663981 +0000 UTC m=+20.326975532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.856397 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.856634 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.861910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.863545 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.864015 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.864879 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.864909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.865405 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.865428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866474 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866510 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866522 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866538 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866556 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866588 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.366571868 +0000 UTC m=+20.338883419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866948 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.867414 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.868290 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.868532 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.867628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.870030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.870331 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.870499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.873940 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.873968 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.873985 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.874063 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.374022351 +0000 UTC m=+20.346334082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.874817 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.880081 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.881506 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f" exitCode=255 Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.881546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f"} Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.881893 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.896787 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.896940 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.900653 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.901434 4758 scope.go:117] "RemoveContainer" containerID="0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.901973 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.908506 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.917985 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.927921 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.940663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.947972 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953562 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953644 4758 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953647 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953658 4758 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953726 4758 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953738 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953751 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953760 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953770 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953780 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953790 4758 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953800 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953811 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953821 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953832 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953841 4758 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953850 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953860 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953868 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953878 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953887 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953896 4758 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953905 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: 
I0130 08:30:14.953915 4758 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953925 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.010247 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.031980 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.041779 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.092619 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7 WatchSource:0}: Error finding container 68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7: Status 404 returned error can't find the container with id 68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7 Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.252709 4758 csr.go:261] certificate signing request csr-7mjk5 is approved, waiting to be issued Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.334765 4758 csr.go:257] certificate signing request csr-7mjk5 is issued Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.357627 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.357806 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.357845 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358029 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358143 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.358116387 +0000 UTC m=+21.330427938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358703 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.358689857 +0000 UTC m=+21.331001408 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358771 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358808 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.358800071 +0000 UTC m=+21.331111622 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.459238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.459294 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459417 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459438 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459450 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459502 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.459486002 +0000 UTC m=+21.431797553 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459500 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459565 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459579 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459667 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:30:16.459639838 +0000 UTC m=+21.431951389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.523559 4758 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523796 4758 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523850 4758 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523896 4758 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523914 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less 
than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523937 4758 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524005 4758 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524060 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524017 4758 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523946 4758 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524089 4758 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": 
Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523935 4758 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524184 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.697773 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:43:01.535917286 +0000 UTC Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.772112 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.772600 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.773393 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.774032 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 
08:30:15.774672 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.775142 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.775819 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.776359 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.777086 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.777609 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.778201 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.778952 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 
08:30:15.779584 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.779838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.780224 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.780784 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.781304 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.781811 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.785195 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.785803 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.786406 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.787195 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.787745 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.788536 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.789164 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.789553 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.790543 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.791519 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.791988 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.792549 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.793397 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.793887 4758 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.793996 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.795811 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.796202 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.796766 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.797236 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.798837 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.799812 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.800301 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.801304 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.801905 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.802739 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.803403 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.804531 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.805663 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.806268 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.806829 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.807287 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.807871 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.808759 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.809245 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.809852 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.810408 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.810949 4758 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.811511 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.811966 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.818494 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.830341 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.844380 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.880404 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.885623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.885667 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bf62cd754a9c3e6b128b325ee021c38fc373360c5f1227f9f8977b095ece7486"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.887685 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.888835 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.889308 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.889976 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.891483 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.891513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62"} Jan 30 08:30:15 crc kubenswrapper[4758]: 
I0130 08:30:15.891526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f0f5c8299bcfac4e97210a3a90338e5a831f4514e1bed329ee68866870bd2a82"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.909682 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.948821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.976066 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.023785 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.053359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.084472 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.105180 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.108574 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8z796"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.108891 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.111718 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.111869 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.111927 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.139329 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.153715 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.185120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 
08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.201624 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.219233 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.234349 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.255084 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.266760 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.266860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/452effb5-a499-4c47-a71d-12198ffa37c8-hosts-file\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.266973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4ck\" (UniqueName: \"kubernetes.io/projected/452effb5-a499-4c47-a71d-12198ffa37c8-kube-api-access-7w4ck\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.289273 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.308628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.326281 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.336409 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 08:25:15 +0000 UTC, rotation deadline is 2026-11-10 21:34:23.897824121 +0000 UTC Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.336460 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6829h4m7.561366689s for 
next certificate rotation Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.342813 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.355541 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367677 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367784 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367806 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4ck\" (UniqueName: \"kubernetes.io/projected/452effb5-a499-4c47-a71d-12198ffa37c8-kube-api-access-7w4ck\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.367844 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.367813776 +0000 UTC m=+23.340125327 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367903 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/452effb5-a499-4c47-a71d-12198ffa37c8-hosts-file\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.367937 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.367983 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.368006 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.367988812 +0000 UTC m=+23.340300473 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.368132 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/452effb5-a499-4c47-a71d-12198ffa37c8-hosts-file\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.368193 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.368056944 +0000 UTC m=+23.340368495 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.374734 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.384808 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.389320 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.389855 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4ck\" (UniqueName: \"kubernetes.io/projected/452effb5-a499-4c47-a71d-12198ffa37c8-kube-api-access-7w4ck\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.419677 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: W0130 08:30:16.429424 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452effb5_a499_4c47_a71d_12198ffa37c8.slice/crio-913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b WatchSource:0}: Error finding container 913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b: Status 404 returned error can't find the container with id 913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.434995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.468404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.468591 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468764 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468803 4758 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468816 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468871 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.468856514 +0000 UTC m=+23.441168065 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469097 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469207 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469299 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469423 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.469404381 +0000 UTC m=+23.441715932 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.504744 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2nkwx"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.505128 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.505271 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.505947 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6t8nj"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.506284 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.506946 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-99ddw"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.507200 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.507461 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.514037 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.520922 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521115 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521128 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521301 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521336 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521785 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521787 4758 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522293 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522307 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522404 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522811 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522817 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523088 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523232 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523446 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.527060 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.536613 4758 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.560307 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.580228 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.607201 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.612375 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.643392 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.659974 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-netns\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 
08:30:16.673311 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99tv\" (UniqueName: \"kubernetes.io/projected/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-kube-api-access-m99tv\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673337 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-binary-copy\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673382 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: 
\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-daemon-config\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cfcde3-10c8-4ece-a78a-9508f04a0f09-mcd-auth-proxy-config\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673527 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673546 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-bin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673574 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95cfcde3-10c8-4ece-a78a-9508f04a0f09-rootfs\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673589 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673606 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-multus\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/95cfcde3-10c8-4ece-a78a-9508f04a0f09-kube-api-access-zwcdw\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673638 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673655 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-conf-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673697 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673713 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673731 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-os-release\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673755 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673796 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-cnibin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673832 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-multus-certs\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673890 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-k8s-cni-cncf-io\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673925 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cfcde3-10c8-4ece-a78a-9508f04a0f09-proxy-tls\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673963 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673981 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-hostroot\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674004 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-system-cni-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674026 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cnibin\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674061 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674080 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674096 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-system-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674111 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-etc-kubernetes\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 
08:30:16.674138 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674160 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674224 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-socket-dir-parent\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc 
kubenswrapper[4758]: I0130 08:30:16.674239 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-kubelet\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674256 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674289 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-os-release\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674303 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-cni-binary-copy\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674321 
4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cznc6\" (UniqueName: \"kubernetes.io/projected/fac75e9c-fc94-4c83-8613-bce0f4744079-kube-api-access-cznc6\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674351 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.676450 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.697407 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.698241 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:02:47.129309217 +0000 UTC Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.698569 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.708997 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.723730 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.737232 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.752149 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.767889 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.767915 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.767953 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.768029 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.768153 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.768216 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.770109 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cznc6\" (UniqueName: \"kubernetes.io/projected/fac75e9c-fc94-4c83-8613-bce0f4744079-kube-api-access-cznc6\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774818 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774838 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-os-release\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774872 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-cni-binary-copy\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774939 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-netns\") pod \"multus-99ddw\" (UID: 
\"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99tv\" (UniqueName: \"kubernetes.io/projected/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-kube-api-access-m99tv\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774983 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774999 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-binary-copy\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775015 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-daemon-config\") pod 
\"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775075 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cfcde3-10c8-4ece-a78a-9508f04a0f09-mcd-auth-proxy-config\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-bin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775150 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95cfcde3-10c8-4ece-a78a-9508f04a0f09-rootfs\") pod \"machine-config-daemon-2nkwx\" (UID: 
\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775166 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775182 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-multus\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/95cfcde3-10c8-4ece-a78a-9508f04a0f09-kube-api-access-zwcdw\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775229 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"ovnkube-node-d2cb9\" (UID: 
\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775244 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-conf-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775263 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775280 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775300 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-os-release\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775335 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775400 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-cnibin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776197 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776268 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-os-release\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776309 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-netns\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775340 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-cnibin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776882 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-k8s-cni-cncf-io\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776912 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-k8s-cni-cncf-io\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-multus-certs\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776751 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-multus-certs\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777093 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777134 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777150 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cfcde3-10c8-4ece-a78a-9508f04a0f09-proxy-tls\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777179 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777205 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777227 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-system-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " 
pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-hostroot\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777273 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-system-cni-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777288 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-daemon-config\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777319 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cnibin\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cnibin\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 
08:30:16.776757 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-cni-binary-copy\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777363 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777394 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777417 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-etc-kubernetes\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cfcde3-10c8-4ece-a78a-9508f04a0f09-mcd-auth-proxy-config\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777456 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777468 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777479 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777523 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-bin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777560 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-socket-dir-parent\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777582 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-kubelet\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-cni-dir\") pod 
\"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777639 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-kubelet\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777661 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-os-release\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777667 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95cfcde3-10c8-4ece-a78a-9508f04a0f09-rootfs\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777696 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777712 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 
08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777760 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-conf-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777862 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777894 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-etc-kubernetes\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777911 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-system-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777944 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778007 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-hostroot\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778036 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778115 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-multus\") pod \"multus-99ddw\" (UID: 
\"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778171 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-socket-dir-parent\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778182 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-system-cni-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778323 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.782488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.785151 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cfcde3-10c8-4ece-a78a-9508f04a0f09-proxy-tls\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.796606 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.799015 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.799296 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cznc6\" (UniqueName: \"kubernetes.io/projected/fac75e9c-fc94-4c83-8613-bce0f4744079-kube-api-access-cznc6\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.799625 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/95cfcde3-10c8-4ece-a78a-9508f04a0f09-kube-api-access-zwcdw\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.805827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m99tv\" (UniqueName: \"kubernetes.io/projected/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-kube-api-access-m99tv\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.812579 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.820126 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.827369 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.828520 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: W0130 08:30:16.832131 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cfcde3_10c8_4ece_a78a_9508f04a0f09.slice/crio-1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065 WatchSource:0}: Error finding container 1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065: Status 404 returned error can't find the container with id 1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065 Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.841845 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.842006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.853223 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.856384 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.871993 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: W0130 08:30:16.872835 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod518ee414_95c2_4ee2_8bef_bd1af1d5afb4.slice/crio-786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8 WatchSource:0}: Error finding container 786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8: Status 404 returned error can't find the container with id 786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8 Jan 30 08:30:16 crc 
kubenswrapper[4758]: I0130 08:30:16.896198 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"8d70505dbacf380ad755907b0497b938a8a8916ec3d2072e37bb1856843d9c78"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.897212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.898257 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.899206 4758 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.901179 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8z796" event={"ID":"452effb5-a499-4c47-a71d-12198ffa37c8","Type":"ContainerStarted","Data":"658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.901219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8z796" event={"ID":"452effb5-a499-4c47-a71d-12198ffa37c8","Type":"ContainerStarted","Data":"913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.902929 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.905661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"63f3857ee1c13d78c65503cac29c070d58c87da4f89329d3e96e0c89b8e387d4"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.915385 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.919624 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.948628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.958393 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.965513 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.969134 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.991184 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.007329 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.024459 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.042171 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.062094 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.082628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.097389 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.115887 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.135244 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.152645 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.172127 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.193427 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.570978 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.585447 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.586238 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.588493 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.601112 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.615715 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.626828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.644960 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.662068 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.679410 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.697492 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.698416 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:50:59.025007573 +0000 UTC Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.710834 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.723997 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.741269 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.756486 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.779402 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.799482 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.820509 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.838189 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.849486 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.861279 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.872371 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.883923 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.900546 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.910341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.910395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.911657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.912810 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" 
event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.914135 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872" exitCode=0 Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.914390 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.915210 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.916018 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac" exitCode=0 Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.916057 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.938188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.967616 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.000022 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.039159 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.137405 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.155761 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.184533 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.203429 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.248092 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.292839 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.320194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.361905 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 
08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.399480 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.399944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400121 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.400091493 +0000 UTC m=+27.372403044 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.400222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.400297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400388 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" 
not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400446 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400537 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.400473205 +0000 UTC m=+27.372784746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400608 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.400598579 +0000 UTC m=+27.372910130 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.443859 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.480973 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.501402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.501611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501612 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501757 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501805 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501819 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501900 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.501882674 +0000 UTC m=+27.474194285 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501764 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.502067 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.502162 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.502148083 +0000 UTC m=+27.474459634 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.517259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.699117 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:30:56.282773384 +0000 UTC Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.768880 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.769072 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.769719 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.769910 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.769729 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.770221 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.922449 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927436 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927452 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927465 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.938557 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.953900 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.972342 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.987089 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.005435 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.019223 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.034689 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.061911 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.070106 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.074064 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.095152 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.116052 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.133315 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.150579 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.310130 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lnh2g"] Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.310727 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.313977 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.314748 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.316245 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.318590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.328028 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.343118 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.358131 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.373966 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.387873 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.406958 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.415355 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/28b9f864-7294-4168-8200-3dbba23ffc97-serviceca\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.415436 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb85r\" (UniqueName: \"kubernetes.io/projected/28b9f864-7294-4168-8200-3dbba23ffc97-kube-api-access-xb85r\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.415619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28b9f864-7294-4168-8200-3dbba23ffc97-host\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.422188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.461771 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 
08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.501479 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.516919 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28b9f864-7294-4168-8200-3dbba23ffc97-serviceca\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.516992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb85r\" (UniqueName: \"kubernetes.io/projected/28b9f864-7294-4168-8200-3dbba23ffc97-kube-api-access-xb85r\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.517031 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28b9f864-7294-4168-8200-3dbba23ffc97-host\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.517130 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28b9f864-7294-4168-8200-3dbba23ffc97-host\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.517996 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28b9f864-7294-4168-8200-3dbba23ffc97-serviceca\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.543789 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.566636 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb85r\" (UniqueName: \"kubernetes.io/projected/28b9f864-7294-4168-8200-3dbba23ffc97-kube-api-access-xb85r\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.604343 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.639642 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.642736 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: W0130 08:30:19.659154 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28b9f864_7294_4168_8200_3dbba23ffc97.slice/crio-15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c WatchSource:0}: Error finding container 15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c: Status 404 returned error can't find the container with id 15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.681414 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276
703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.699922 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:11:20.089865186 +0000 UTC Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.721140 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.934926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.935001 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 
08:30:19.936592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lnh2g" event={"ID":"28b9f864-7294-4168-8200-3dbba23ffc97","Type":"ContainerStarted","Data":"df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.936654 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lnh2g" event={"ID":"28b9f864-7294-4168-8200-3dbba23ffc97","Type":"ContainerStarted","Data":"15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.938630 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a" exitCode=0 Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.938660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.953536 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.968017 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.981327 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.998999 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.013206 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.028780 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.044672 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.063529 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.066811 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.068103 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.072859 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.100814 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.145028 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.187808 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.190096 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.238986 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.277008 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.327282 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.366504 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.411670 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.443999 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.479701 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 
08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.518406 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.557759 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.600097 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.639091 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.683188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.701929 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:58:38.579392301 +0000 UTC Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.718481 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.757021 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.768239 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.768301 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:20 crc kubenswrapper[4758]: E0130 08:30:20.768398 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:20 crc kubenswrapper[4758]: E0130 08:30:20.768435 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.768304 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:20 crc kubenswrapper[4758]: E0130 08:30:20.768520 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.798951 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.845996 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.876971 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.915870 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.942304 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f" exitCode=0 Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.942491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f"} Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.958968 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.000238 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.013856 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015491 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015658 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.038083 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.091086 4758 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.091336 4758 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092657 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.107685 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111465 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111475 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111498 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.119904 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.122743 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126565 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.137994 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141401 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141426 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.154181 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.164957 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.172538 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.172664 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.174520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.174708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.174862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.175030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.175189 4758 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.198684 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.238694 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277736 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277790 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.279865 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.326194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.359934 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.379664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.379886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.379950 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.380010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.380091 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.398669 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d8879332
8b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.443345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 
08:30:21.482987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483103 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483115 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.496245 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.521672 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.557610 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586909 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689275 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.702128 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:46:11.43930709 +0000 UTC Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792454 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792524 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895287 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.950355 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39" exitCode=0 Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.950398 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.955566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.970962 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.983522 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.998984 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999258 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.012568 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.026620 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.041022 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.052666 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.066332 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.091498 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102491 4758 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.113733 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.180821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.197857 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204869 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.211658 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.232291 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.247929 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 
08:30:22.307199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307231 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307244 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410634 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.450478 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.450749 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.450715623 +0000 UTC m=+35.423027184 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.451127 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451250 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451406 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.451362343 +0000 UTC m=+35.423673904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.451512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451590 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451869 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.451844678 +0000 UTC m=+35.424156269 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514436 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.552223 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.552264 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552416 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552447 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552459 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552458 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 
08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552492 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552500 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.552488074 +0000 UTC m=+35.524799625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552510 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552581 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.552557336 +0000 UTC m=+35.524868947 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.704302 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:31:42.272569091 +0000 UTC Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720314 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720357 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.768022 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.768022 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.768158 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.768213 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.768465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.768532 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822447 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924974 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.961594 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.998546 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:3
0:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.017345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 
08:30:23.028365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.035848 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.063702 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99
tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.077867 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.091235 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.111370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.130471 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 
08:30:23.132257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.157572 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.178883 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.194571 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 
08:30:23.209638 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 
08:30:23.222221 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.234546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.234772 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.234927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.235076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.235197 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.236627 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.259641 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337823 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337897 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441487 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.544533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545618 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653620 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.704866 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:30:50.041511205 +0000 UTC Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781400 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884688 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.968803 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792" exitCode=0 Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.968999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.978588 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.980641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.980737 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 
crc kubenswrapper[4758]: I0130 08:30:23.987203 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.992478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.009193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.028476 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30
T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.031677 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.043896 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.060048 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.080294 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090304 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.104375 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.117688 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.135621 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.150219 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.164851 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.184908 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9
d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.197315 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.212745 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.232583 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.250258 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.266838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.282750 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295259 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.306377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.319861 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T0
8:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.333418 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\
\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.352259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.370431 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.389023 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397753 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397789 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397858 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.402795 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.413266 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.432198 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.450637 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.464953 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500451 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606509 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.705124 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:17:35.121926556 +0000 UTC Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711429 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.768284 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:24 crc kubenswrapper[4758]: E0130 08:30:24.768824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.768289 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:24 crc kubenswrapper[4758]: E0130 08:30:24.769170 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.768282 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:24 crc kubenswrapper[4758]: E0130 08:30:24.769362 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815145 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918355 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.985287 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06" exitCode=0 Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.985471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.986181 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.004518 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.017801 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.023964 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024341 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.040930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.059218 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.075386 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.089943 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.103290 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.114352 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 
08:30:25.128211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128278 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.129206 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.143263 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.163005 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.179258 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.199515 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.217826 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231229 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.242087 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.255316 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.274723 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.286208 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.298424 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.307503 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.319698 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.333897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.333966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.333987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.334010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.334025 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.337434 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.351727 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.363681 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.375789 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.388317 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.400709 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.416655 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436528 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436631 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538605 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.641371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.641656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.641804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.642135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.642461 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.706662 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:57:02.160233454 +0000 UTC Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745754 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.782545 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.798432 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.813438 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.837216 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848674 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.863787 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.876663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.893172 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.915284 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.935796 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.950453 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952842 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.965254 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.986184 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.994183 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.013242 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.032997 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.044500 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055391 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055419 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.057914 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.069539 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.085507 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.104266 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.117024 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.131471 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.143861 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 
08:30:26.157867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157919 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.160128 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.173919 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.188074 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.204238 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.231593 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.261074 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.275508 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.301463 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.342340 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364137 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364238 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.381307 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568567 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671919 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.707383 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:32:07.403698677 +0000 UTC Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.767829 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.767848 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.768167 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:26 crc kubenswrapper[4758]: E0130 08:30:26.768314 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:26 crc kubenswrapper[4758]: E0130 08:30:26.768510 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:26 crc kubenswrapper[4758]: E0130 08:30:26.768549 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774811 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.979968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.979999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.980008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.980020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.980029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082334 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082356 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184657 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287328 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389690 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491773 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594350 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697775 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697851 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.708350 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:23:53.805352423 +0000 UTC Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800994 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.801012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904499 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904511 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.930073 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"] Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.930823 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.933761 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.940244 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.952649 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:27Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.968301 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:27Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.991191 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:27Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.002679 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/0.log" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006743 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006755 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac" exitCode=1 Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.007899 4758 scope.go:117] "RemoveContainer" containerID="706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.016649 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\
":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.027985 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.028079 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fvp\" (UniqueName: \"kubernetes.io/projected/0944aacc-db22-4503-990b-f5724b55d4ae-kube-api-access-96fvp\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.028632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0944aacc-db22-4503-990b-f5724b55d4ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.029710 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.037794 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.052344 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.070462 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.086495 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.099506 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110714 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.113699 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.127894 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130797 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fvp\" (UniqueName: \"kubernetes.io/projected/0944aacc-db22-4503-990b-f5724b55d4ae-kube-api-access-96fvp\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130969 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0944aacc-db22-4503-990b-f5724b55d4ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.131137 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.131222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" 
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.144353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0944aacc-db22-4503-990b-f5724b55d4ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.144684 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.152249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96fvp\" (UniqueName: 
\"kubernetes.io/projected/0944aacc-db22-4503-990b-f5724b55d4ae-kube-api-access-96fvp\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.160468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.176838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.194004 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.206902 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212733 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.219549 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022
363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.236371 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.245003 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.254312 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: W0130 08:30:28.258512 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0944aacc_db22_4503_990b_f5724b55d4ae.slice/crio-207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5 WatchSource:0}: Error finding container 207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5: Status 404 returned error can't find the container with id 207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5 Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.267115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.281758 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.306546 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317183 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317194 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.318637 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.329521 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.339281 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.349975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.360892 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.378750 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.395922 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.409001 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 
08:30:28.420145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420157 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.425488 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.439251 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.521973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522061 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624927 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.709079 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:15:37.172109944 +0000 UTC Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.747861 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.767952 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.768018 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:28 crc kubenswrapper[4758]: E0130 08:30:28.768122 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.768127 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:28 crc kubenswrapper[4758]: E0130 08:30:28.768237 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:28 crc kubenswrapper[4758]: E0130 08:30:28.768426 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.769208 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.783848 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.809272 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.830326 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.850645 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.864532 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.889736 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.912629 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932207 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc 
kubenswrapper[4758]: I0130 08:30:28.932263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932272 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.952702 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.971577 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.988376 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.999257 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.011975 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/0.log" 
Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.014259 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.014932 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" event={"ID":"0944aacc-db22-4503-990b-f5724b55d4ae","Type":"ContainerStarted","Data":"207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.017013 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f280
8b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.030560 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038904 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.046981 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.058855 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc 
kubenswrapper[4758]: I0130 08:30:29.140742 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140765 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.242910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243934 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346652 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449157 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449168 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551596 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654323 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.709199 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:50:15.847761413 +0000 UTC Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.756916 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.756975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.756990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.757008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.757024 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859670 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859694 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962383 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962401 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.020660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" event={"ID":"0944aacc-db22-4503-990b-f5724b55d4ae","Type":"ContainerStarted","Data":"1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.021114 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" event={"ID":"0944aacc-db22-4503-990b-f5724b55d4ae","Type":"ContainerStarted","Data":"e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.022734 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/1.log" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.023488 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/0.log" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.027132 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" exitCode=1 Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.027180 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.027357 4758 scope.go:117] "RemoveContainer" containerID="706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.028027 
4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.028286 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065141 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.073863 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.095394 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.109363 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.125468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.139647 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.151821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.165381 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gj6b4"] Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.166125 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.166269 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167429 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.168881 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.180549 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.192985 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.207525 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.218822 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.231642 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.242067 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb877
9289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.253699 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 
crc kubenswrapper[4758]: I0130 08:30:30.253763 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zv8j\" (UniqueName: \"kubernetes.io/projected/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-kube-api-access-9zv8j\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.253811 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.266377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270524 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.289701 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.300228 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.311268 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc 
kubenswrapper[4758]: I0130 08:30:30.325154 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.338419 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.349206 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.354567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zv8j\" (UniqueName: \"kubernetes.io/projected/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-kube-api-access-9zv8j\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.354655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.354785 4758 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.354867 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.854827842 +0000 UTC m=+35.827139393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.359761 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0
f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.369930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1
181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.371139 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zv8j\" (UniqueName: \"kubernetes.io/projected/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-kube-api-access-9zv8j\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372869 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.386080 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.397158 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.408633 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:
14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36b
bc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.421106 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.446017 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.455823 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.455899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.455920 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.455900331 +0000 UTC m=+51.428211882 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.455979 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.455990 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.456008 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.456001105 +0000 UTC m=+51.428312656 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.456066 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.456106 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.456096798 +0000 UTC m=+51.428408349 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.464512 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475685 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.486351 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:
59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.498085 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.507197 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.520020 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.556567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.556607 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556731 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556744 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556754 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556763 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556793 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556800 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.556788405 +0000 UTC m=+51.529099956 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556805 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556854 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.556839816 +0000 UTC m=+51.529151367 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578960 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681418 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.710806 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:16:07.940673444 +0000 UTC Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.767663 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.767765 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.767673 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.767915 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.767986 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.768282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789403 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.859578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.859724 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.859769 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:31.859756353 +0000 UTC m=+36.832067904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891703 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994833 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.036539 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/1.log" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.039143 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.039301 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.052507 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.072352 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096964 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.097199 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.115073 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.127789 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.138483 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.155903 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.169716 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.182049 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183306 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.196320 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.199402 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc 
kubenswrapper[4758]: I0130 08:30:31.200463 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200499 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200531 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200544 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.214369 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.215620 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221567 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.231459 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.237520 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.243742 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.256649 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260157 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260166 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.272828 4758 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.272933 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274698 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.276969 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.288767 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.303156 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377522 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.480928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481677 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583979 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.686932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.686989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.687001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.687018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.687033 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.711503 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:28:36.293089809 +0000 UTC Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.768533 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.768659 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791915 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.869060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.869217 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.869267 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:33.869253986 +0000 UTC m=+38.841565527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997459 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100163 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100232 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.202973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203125 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306360 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.409954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.409982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.409990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.410001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.410010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511753 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511802 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615562 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.712277 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:47:53.513045414 +0000 UTC Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717958 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.768485 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:32 crc kubenswrapper[4758]: E0130 08:30:32.768697 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.768577 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:32 crc kubenswrapper[4758]: E0130 08:30:32.768857 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.768598 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:32 crc kubenswrapper[4758]: E0130 08:30:32.769194 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820945 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922961 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025744 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025768 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333496 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.436638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437450 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.540950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541687 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645713 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.712869 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:21:08.609220653 +0000 UTC Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748746 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748777 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.768668 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:33 crc kubenswrapper[4758]: E0130 08:30:33.769145 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851815 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.892254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:33 crc kubenswrapper[4758]: E0130 08:30:33.892579 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:33 crc kubenswrapper[4758]: E0130 08:30:33.893467 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:37.893368179 +0000 UTC m=+42.865679770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.956255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.956811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.957086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.957283 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.957535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061729 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061780 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.165653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166547 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269869 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372539 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476291 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579257 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681863 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.713958 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 14:21:32.618158121 +0000 UTC Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.767853 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.768001 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:34 crc kubenswrapper[4758]: E0130 08:30:34.768121 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:34 crc kubenswrapper[4758]: E0130 08:30:34.768256 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.768371 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:34 crc kubenswrapper[4758]: E0130 08:30:34.768589 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786988 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890867 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994621 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097432 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199864 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302270 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302300 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.404843 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405410 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507435 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507615 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610974 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713809 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.714114 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:59:16.797740051 +0000 UTC Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.768102 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:35 crc kubenswrapper[4758]: E0130 08:30:35.768255 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.785650 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.805617 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"
data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 
08:30:35.817222 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817268 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.820101 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.831634 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.845827 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.862145 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.875127 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc 
kubenswrapper[4758]: I0130 08:30:35.889662 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc 
kubenswrapper[4758]: I0130 08:30:35.900096 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.915170 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919600 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919610 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.929679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.947264 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.962063 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.976122 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.995878 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.010441 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022249 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.031208 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124577 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124668 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227768 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331622 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540646 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.715117 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:59:05.725606096 +0000 UTC Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749670 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.767958 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.768011 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.768112 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:36 crc kubenswrapper[4758]: E0130 08:30:36.768192 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:36 crc kubenswrapper[4758]: E0130 08:30:36.768721 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:36 crc kubenswrapper[4758]: E0130 08:30:36.768795 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.854605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855283 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855306 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960575 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062970 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166600 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271191 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483252 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483310 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586631 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690145 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.716135 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:25:25.33079176 +0000 UTC Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.768678 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:37 crc kubenswrapper[4758]: E0130 08:30:37.769017 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793912 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897563 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.945070 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:37 crc kubenswrapper[4758]: E0130 08:30:37.945385 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:37 crc kubenswrapper[4758]: E0130 08:30:37.945556 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:45.945517627 +0000 UTC m=+50.917829208 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000972 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209671 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313669 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417843 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522849 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626561 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626599 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.716962 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:11:35.583947473 +0000 UTC Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729303 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.768607 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:38 crc kubenswrapper[4758]: E0130 08:30:38.768758 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.769013 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:38 crc kubenswrapper[4758]: E0130 08:30:38.770404 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.770548 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:38 crc kubenswrapper[4758]: E0130 08:30:38.770922 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833747 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833832 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833848 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937990 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041385 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144218 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247710 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351455 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454512 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.556997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557072 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.718110 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:41:59.1303438 +0000 UTC Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.767577 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:39 crc kubenswrapper[4758]: E0130 08:30:39.767868 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866523 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866536 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.970012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073872 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.176951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.176998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.177012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.177027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.177070 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280383 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280456 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280474 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280522 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.384987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385171 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.488988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489219 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593441 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593485 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697576 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697601 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.718988 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:30:28.823094701 +0000 UTC Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.768431 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.768435 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:40 crc kubenswrapper[4758]: E0130 08:30:40.768633 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:40 crc kubenswrapper[4758]: E0130 08:30:40.768725 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.768455 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:40 crc kubenswrapper[4758]: E0130 08:30:40.768847 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905429 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905630 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008872 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113613 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217504 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.319178 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326461 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.341905 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.347942 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348035 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348093 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348114 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.370077 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375475 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375494 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.395677 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.407014 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.432021 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.432405 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.435861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.435948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.435967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.436023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.436098 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.539971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540095 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540109 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643796 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.719287 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:08:09.292688719 +0000 UTC Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748471 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.768568 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.768743 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852163 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852193 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955463 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955548 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.058807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.087146 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.100149 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.111263 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.133966 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.163190 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.178242 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.197120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.221296 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.234071 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.246269 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.258729 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265318 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.271079 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc 
kubenswrapper[4758]: I0130 08:30:42.283138 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.295823 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.306144 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb877
9289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.320547 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.336636 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.352618 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.365216 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368278 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471172 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.675546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.675594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.675603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.675616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.675627 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.721113 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:23:31.878844656 +0000 UTC Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.768505 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.768534 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:42 crc kubenswrapper[4758]: E0130 08:30:42.768650 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.768677 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:42 crc kubenswrapper[4758]: E0130 08:30:42.768776 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:42 crc kubenswrapper[4758]: E0130 08:30:42.768858 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.777664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.777939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.778025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.778121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.778197 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880922 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880963 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983870 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086237 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188737 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291655 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394556 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.496945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.496973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.496985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.497000 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.497012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599958 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702729 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.721757 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:12:07.686911784 +0000 UTC Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.767845 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:43 crc kubenswrapper[4758]: E0130 08:30:43.767980 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.769213 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806163 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.908974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.909379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.909618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.909878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.910098 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012527 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012543 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116319 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218794 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321612 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423982 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526352 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526372 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628210 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.723169 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:19:16.856169978 +0000 UTC Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734884 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.767694 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.767819 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:44 crc kubenswrapper[4758]: E0130 08:30:44.767821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:44 crc kubenswrapper[4758]: E0130 08:30:44.767899 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.767924 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:44 crc kubenswrapper[4758]: E0130 08:30:44.767961 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837363 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837387 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.040996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041060 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041089 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.089801 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.090295 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/1.log" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.092379 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" exitCode=1 Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.092419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.092454 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.093032 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.093266 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.106303 4758 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.115115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.124679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc 
kubenswrapper[4758]: I0130 08:30:45.135172 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142973 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.146715 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.157357 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.169185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb877
9289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.190948 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.204078 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.216194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.227930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.239772 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244547 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.257596 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal 
error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.270457 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.283847 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.297743 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.311292 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.335241 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: 
I0130 08:30:45.346329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552628 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552666 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655264 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655273 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.724108 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:26:04.919712975 +0000 UTC Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758806 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.768399 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.768584 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.788152 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc
/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.799797 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.813935 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.826204 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.839957 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.854941 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860286 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.872130 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.886500 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.897641 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb877
9289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.910795 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.921876 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.945236 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal 
error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.947533 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.947691 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.947758 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:01.947742061 +0000 UTC m=+66.920053632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.958574 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 
08:30:45.961941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.962020 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.980193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.998502 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.009138 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:46Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.024185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:46Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.035207 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:46Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064882 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064906 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.096444 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167721 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270690 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.373879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374878 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476956 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.552080 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.552213 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:31:18.552194544 +0000 UTC m=+83.524506095 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.552643 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.552821 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.552855 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.552960 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.553135 4758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.553120773 +0000 UTC m=+83.525432344 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.553236 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.553222546 +0000 UTC m=+83.525534097 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579228 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.654361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.654699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.654567 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655137 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655242 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.654932 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:46 crc 
kubenswrapper[4758]: E0130 08:30:46.655359 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655377 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655441 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.655420709 +0000 UTC m=+83.627732270 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655607 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.655588485 +0000 UTC m=+83.627900046 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682390 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.724833 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:37:28.300932022 +0000 UTC Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.767658 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.767783 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.767655 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.767671 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.767931 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.767980 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784739 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887219 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887255 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989635 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989647 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091678 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296380 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399378 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501650 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604272 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.725765 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:27:07.759762232 +0000 UTC Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.768585 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:47 crc kubenswrapper[4758]: E0130 08:30:47.768758 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809532 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912705 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015692 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119965 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222868 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222901 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.325917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326122 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429730 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532746 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532824 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.635800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636497 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.726595 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:09:06.461208833 +0000 UTC Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740560 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.767910 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.767987 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.768011 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:48 crc kubenswrapper[4758]: E0130 08:30:48.768154 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:48 crc kubenswrapper[4758]: E0130 08:30:48.768297 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:48 crc kubenswrapper[4758]: E0130 08:30:48.768461 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843657 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843694 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948556 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948641 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.051678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.051915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.051987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.052071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.052137 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154830 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.214506 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.215313 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:30:49 crc kubenswrapper[4758]: E0130 08:30:49.215458 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.231666 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.246393 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb877
9289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257688 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257716 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.266922 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 
08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.285991 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.303792 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.320666 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.337408 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.352796 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.368400 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.397544 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.420226 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0
d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.436002 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2f
c3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc 
kubenswrapper[4758]: I0130 08:30:49.463014 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.465849 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.483009 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.498484 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.520999 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.540630 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.559524 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc 
kubenswrapper[4758]: I0130 08:30:49.566262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566996 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671474 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.727107 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:03:42.730326562 +0000 UTC Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.767882 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:49 crc kubenswrapper[4758]: E0130 08:30:49.768441 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876845 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.980278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.980750 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.980952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.981203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.981404 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.084064 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187537 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.393991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394072 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394129 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500839 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603838 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706825 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.728251 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:22:12.893004759 +0000 UTC Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.767541 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.767562 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:50 crc kubenswrapper[4758]: E0130 08:30:50.767681 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.767869 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:50 crc kubenswrapper[4758]: E0130 08:30:50.767929 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:50 crc kubenswrapper[4758]: E0130 08:30:50.768122 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808887 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911600 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911629 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014876 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116982 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219493 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322572 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425812 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528568 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631910 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693500 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.714828 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719608 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.729017 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:37:50.035972504 +0000 UTC Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.762144 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",
\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.768407 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.768621 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774413 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.805799 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810503 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810542 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.828350 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.831948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.831977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.831989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.832006 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.832018 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.843976 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.844370 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845843 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845916 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948736 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.949021 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050960 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153748 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153807 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256833 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359418 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462297 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462313 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565383 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667863 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.730457 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:50:29.652682684 +0000 UTC Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.768713 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:52 crc kubenswrapper[4758]: E0130 08:30:52.768914 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.769257 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.769297 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:52 crc kubenswrapper[4758]: E0130 08:30:52.769411 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:52 crc kubenswrapper[4758]: E0130 08:30:52.769488 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.770923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.770969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.770985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.771009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.771026 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873741 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079666 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182386 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284656 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387270 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489934 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695345 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.731243 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:09:00.215245171 +0000 UTC Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.768793 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:53 crc kubenswrapper[4758]: E0130 08:30:53.768993 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797791 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900836 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900945 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.003759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004918 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108485 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.210460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.210860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.211024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.211272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.211468 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313794 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416737 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416883 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519387 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.622989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.623418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.623634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.624068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.624456 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727026 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727111 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.731393 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:50:25.840520059 +0000 UTC Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.767904 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.767971 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:54 crc kubenswrapper[4758]: E0130 08:30:54.768017 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:54 crc kubenswrapper[4758]: E0130 08:30:54.768130 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.768647 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:54 crc kubenswrapper[4758]: E0130 08:30:54.769014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.828990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829570 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932621 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034989 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137212 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137324 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.239870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240487 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343792 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446570 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548909 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548939 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651378 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651390 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.731879 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:48:38.707214196 +0000 UTC Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753199 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.768716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:55 crc kubenswrapper[4758]: E0130 08:30:55.768903 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.785370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.797956 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.810027 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc 
kubenswrapper[4758]: I0130 08:30:55.823846 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.840358 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856915 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856901 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.872022 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.884132 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.897193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.908770 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.922368 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.935019 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.957773 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958924 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.970838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.989229 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.002268 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.011477 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.026901 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060458 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162751 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265463 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265527 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367823 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.572562 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.572846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.572935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.573016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.573124 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.677551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.677876 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.678025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.678230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.678374 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.732991 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:48:43.991617462 +0000 UTC Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.767594 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.767610 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:56 crc kubenswrapper[4758]: E0130 08:30:56.768414 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.767687 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:56 crc kubenswrapper[4758]: E0130 08:30:56.768601 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:56 crc kubenswrapper[4758]: E0130 08:30:56.768268 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.781943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782965 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.886721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887610 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990357 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.092923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.092978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.092996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.093023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.093075 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195566 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.297996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298105 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400236 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.502983 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503055 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503095 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.606340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.606839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.607033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.607377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.607626 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710738 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710756 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.734115 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:39:18.688885816 +0000 UTC Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.767732 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:57 crc kubenswrapper[4758]: E0130 08:30:57.768101 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813538 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813562 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813579 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916867 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019604 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121258 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223493 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326126 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429365 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.531760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532103 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532374 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.633948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.633982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.633991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.634004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.634012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.734846 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:03:02.09998688 +0000 UTC Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736244 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.768450 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:58 crc kubenswrapper[4758]: E0130 08:30:58.768598 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.768838 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:58 crc kubenswrapper[4758]: E0130 08:30:58.768939 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.769146 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:58 crc kubenswrapper[4758]: E0130 08:30:58.769245 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941987 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052377 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.154111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.154375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.154564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.155240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.156000 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258237 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360093 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360105 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.463192 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.565998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566115 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668685 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668785 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.736293 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:37:16.235032916 +0000 UTC Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.768283 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:59 crc kubenswrapper[4758]: E0130 08:30:59.768420 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771933 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873836 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976530 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078739 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078840 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181370 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283895 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386805 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489699 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.591938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592426 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694565 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.737338 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:23:14.895848828 +0000 UTC Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.768294 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.768358 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.768440 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.768541 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.768724 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.769105 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.769653 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.770127 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899762 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899997 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003770 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107266 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107641 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107864 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210195 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.313004 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415748 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518123 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621560 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724877 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.737909 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:57:22.591915269 +0000 UTC Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.768496 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:01 crc kubenswrapper[4758]: E0130 08:31:01.768660 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827692 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930215 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930282 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.999080 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.014820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.014935 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.014989 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:34.014975591 +0000 UTC m=+98.987287142 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.015817 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7
cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020529 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.036217 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040782 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.055792 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060383 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.074058 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.092825 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.092996 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094778 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197725 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197799 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300948 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403920 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506998 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609741 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609806 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.712774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713252 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713640 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.739335 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:09:29.373276336 +0000 UTC Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.767777 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.767777 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.767883 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.768584 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.768707 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.768824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816823 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816875 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.919907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920593 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023747 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023758 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126619 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.228993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229087 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331230 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.432995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433046 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433068 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535386 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637627 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.739436 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:23:24.441041943 +0000 UTC Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740855 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.768180 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:03 crc kubenswrapper[4758]: E0130 08:31:03.768322 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843205 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945472 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.047941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048450 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150274 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150338 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.154253 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/0.log" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.154303 4758 generic.go:334] "Generic (PLEG): container finished" podID="fac75e9c-fc94-4c83-8613-bce0f4744079" containerID="0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb" exitCode=1 Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.154367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerDied","Data":"0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.155095 4758 scope.go:117] "RemoveContainer" containerID="0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.178970 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.191719 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.219380 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.230821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.244644 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.253086 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.261261 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.275951 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.300932 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.321989 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.334498 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.346755 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc 
kubenswrapper[4758]: I0130 08:31:04.355792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355856 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.362088 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.375185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.386731 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.397990 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.410616 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.424663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.441679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458628 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458663 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561853 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664188 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.740428 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:48:30.297152265 +0000 UTC Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766271 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.767535 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.767592 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:04 crc kubenswrapper[4758]: E0130 08:31:04.767639 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.767686 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:04 crc kubenswrapper[4758]: E0130 08:31:04.767724 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:04 crc kubenswrapper[4758]: E0130 08:31:04.767865 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868583 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971136 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073264 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.159203 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/0.log" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.159263 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.174588 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175635 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175661 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.192027 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.208115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e839
7cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.225502 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.241006 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.258772 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.269982 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.281975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022
363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.295426 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.312913 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.322475 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.335738 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.346748 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.366897 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.377597 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.378938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.378979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.378991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.379005 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.379014 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.387204 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.396406 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.406079 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc 
kubenswrapper[4758]: I0130 08:31:05.481296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583600 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686385 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.740986 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:13:10.530927876 +0000 UTC Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.768536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:05 crc kubenswrapper[4758]: E0130 08:31:05.768802 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.781499 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.788783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.788888 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.788952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.789012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.789097 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.802512 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.814760 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.824070 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.835970 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.846478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.861743 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.873321 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc 
kubenswrapper[4758]: I0130 08:31:05.885105 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1
502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.890985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891647 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.896274 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.906866 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.916829 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.926203 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.937770 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.953001 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e839
7cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.967332 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.981682 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994247 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994339 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.004081 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686
e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:06Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095967 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.197871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.198437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.198598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.198797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.199029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302268 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404590 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507836 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612271 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714646 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.741877 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:22:15.22525794 +0000 UTC Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.767912 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:06 crc kubenswrapper[4758]: E0130 08:31:06.768059 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.768122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:06 crc kubenswrapper[4758]: E0130 08:31:06.768179 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.768223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:06 crc kubenswrapper[4758]: E0130 08:31:06.768273 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816746 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816782 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816821 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919352 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919744 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919819 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.021995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.022249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.022313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.022378 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.022435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.124453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.124481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.124494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.124509 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.124518 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.226861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.226910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.226922 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.226941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.226953 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.329000 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.329047 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.329056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.329073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.329088 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.431425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.431457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.431464 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.431477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.431487 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.534884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.534923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.534934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.534954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.534964 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.638502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.638768 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.638840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.638918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.638980 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.742196 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:30:44.545657582 +0000 UTC Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.742761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.742906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.742977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.743068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.743165 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.768247 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:07 crc kubenswrapper[4758]: E0130 08:31:07.768384 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.779546 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.846060 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.846290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.846537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.846706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.846826 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.949883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.949916 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.949926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.949940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.949950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:07Z","lastTransitionTime":"2026-01-30T08:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.058021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.058105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.058123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.058148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.058163 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.160834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.161161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.161231 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.161302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.161359 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.263813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.263853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.263861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.263875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.263883 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.365824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.365861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.365871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.365887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.365898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.467782 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.467818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.467827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.467841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.467852 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.570393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.570428 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.570439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.570454 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.570465 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.672333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.672362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.672371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.672384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.672393 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.742675 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:39:57.775086257 +0000 UTC Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.768549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.768604 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:08 crc kubenswrapper[4758]: E0130 08:31:08.768657 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.768549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:08 crc kubenswrapper[4758]: E0130 08:31:08.768737 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:08 crc kubenswrapper[4758]: E0130 08:31:08.768821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.774360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.774388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.774398 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.774412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.774421 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.877957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.877990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.877998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.878011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.878020 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.980718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.980754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.980763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.980777 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.980788 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:08Z","lastTransitionTime":"2026-01-30T08:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.083317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.083352 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.083362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.083375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.083385 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.186332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.186381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.186390 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.186407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.186418 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.289681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.289721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.289732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.289747 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.289758 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.392418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.392453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.392462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.392476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.392486 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.494767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.494798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.494811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.494826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.494835 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.596827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.597058 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.597227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.597325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.597394 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.699856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.700118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.700191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.700258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.700323 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.743427 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:38:08.576440834 +0000 UTC Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.767748 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:09 crc kubenswrapper[4758]: E0130 08:31:09.767889 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.802557 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.802596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.802609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.802625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.802637 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.907884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.907912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.907921 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.907948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.907956 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:09Z","lastTransitionTime":"2026-01-30T08:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.011003 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.011097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.011113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.011132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.011148 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.113756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.113784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.113791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.113806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.113814 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.216298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.216335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.216343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.216358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.216368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.319322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.319345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.319353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.319365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.319373 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.422232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.422281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.422294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.422313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.422325 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.525379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.525414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.525423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.525437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.525446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.628178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.628237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.628259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.628287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.628309 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.731127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.731197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.731218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.731247 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.731266 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.743882 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:09:05.033447475 +0000 UTC Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.768412 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.768432 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.768512 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:10 crc kubenswrapper[4758]: E0130 08:31:10.768534 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:10 crc kubenswrapper[4758]: E0130 08:31:10.768679 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:10 crc kubenswrapper[4758]: E0130 08:31:10.768827 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.833581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.833629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.833642 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.833662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.833677 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.937034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.937135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.937160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.937192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.937229 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:10Z","lastTransitionTime":"2026-01-30T08:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.039518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.039583 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.039601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.039628 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.039645 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.142370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.142420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.142429 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.142444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.142454 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.245160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.245199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.245213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.245229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.245240 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.348239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.348281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.348289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.348303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.348314 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.452489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.452532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.452542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.452557 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.452567 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.555390 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.555445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.555462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.555484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.555502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.658129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.658177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.658186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.658199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.658207 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.744800 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:52:44.619348959 +0000 UTC Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.761344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.761374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.761384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.761396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.761405 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.768438 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:11 crc kubenswrapper[4758]: E0130 08:31:11.768597 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.863816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.863855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.863865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.863881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.863891 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966380 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966536 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.069965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070168 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193827 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.215690 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220803 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220922 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.241596 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247451 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247472 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272729 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272796 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298448 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.320826 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.321097 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323270 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323280 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426636 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530326 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633331 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736207 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736223 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.745481 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:16:03.038673039 +0000 UTC Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.767602 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.767647 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.767604 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.767787 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.767887 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.768029 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839490 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.942585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.942926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.943205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.943414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.943626 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.046947 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.046997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.047011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.047031 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.047080 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150247 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150283 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.273936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.273992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.274009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.274033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.274081 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376623 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479541 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582438 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685746 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.746438 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:53:37.717942551 +0000 UTC Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.767928 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:13 crc kubenswrapper[4758]: E0130 08:31:13.768181 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788236 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788343 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891363 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993240 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197521 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197541 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197551 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.299972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300064 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300079 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.402993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403213 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505734 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.609330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.609659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.609881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.610134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.610404 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.713848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.714201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.714433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.714771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.715153 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.747336 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:47:50.137034645 +0000 UTC Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.768295 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.768327 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:14 crc kubenswrapper[4758]: E0130 08:31:14.768501 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.768538 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:14 crc kubenswrapper[4758]: E0130 08:31:14.769130 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:14 crc kubenswrapper[4758]: E0130 08:31:14.769234 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.769880 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819649 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922846 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.029879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.029934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.029951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.030016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.030049 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133413 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.192384 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.201499 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.202860 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.225788 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240293 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240399 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.251687 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.278162 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.298748 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.312791 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.330200 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.342945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343107 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.348900 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.367813 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.402382 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.422870 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.436764 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446568 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.452570 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc 
kubenswrapper[4758]: I0130 08:31:15.470531 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.485720 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 
08:31:15.507487 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.522913 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e839
7cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.544906 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce
1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.561750 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.579009 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.656966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657107 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.747677 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:49:24.274417424 +0000 UTC Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760141 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760172 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.768786 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:15 crc kubenswrapper[4758]: E0130 08:31:15.769106 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.785619 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.801309 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 
08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5ab
fdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.819721 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.840634 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.857345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864378 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.875852 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.894332 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.911478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.930522 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.950456 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.964828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967709 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967792 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.979601 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.007682 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.024286 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.048656 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070910 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.074120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.095431 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.110998 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.128706 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc 
kubenswrapper[4758]: I0130 08:31:16.173480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173554 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276775 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276936 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.380589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381577 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484638 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587983 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.691953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692849 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.748581 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:23:22.135372563 +0000 UTC Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.768101 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.768135 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:16 crc kubenswrapper[4758]: E0130 08:31:16.768725 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:16 crc kubenswrapper[4758]: E0130 08:31:16.768943 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.768306 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:16 crc kubenswrapper[4758]: E0130 08:31:16.769546 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796090 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796097 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899743 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899859 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003795 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.108010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.210928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211226 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.212994 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.213767 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.217435 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" exitCode=1 Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.217508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.217577 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.219074 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:17 crc kubenswrapper[4758]: E0130 08:31:17.219395 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.241486 4758 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.258132 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.282564 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.298502 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.310851 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314743 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.328758 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.351674 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.366524 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.380894 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc 
kubenswrapper[4758]: I0130 08:31:17.407975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1
502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417722 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.426252 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.445674 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.465623 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.483804 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.503969 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.520646 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e839
7cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521231 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521311 4758 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.541401 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.561565 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.585302 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:16Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:31:16.166408 6702 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9\\\\nI0130 08:31:16.166421 6702 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9\\\\nI0130 08:31:16.166437 6702 
ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9 in node crc\\\\nI0130 08:31:16.166449 6702 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9 after 0 failed attempt(s)\\\\nI0130 08:31:16.166455 6702 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-d2cb9\\\\nI0130 08:31:16.166465 6702 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq\\\\nI0130 08:31:16.166470 6702 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq\\\\nI0130 08:31:16.166476 6702 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq in node crc\\\\nI0130 08:31:16.166481 6702 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq after \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\
\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ov
errides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 
08:31:17.624294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624875 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.728835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.728905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.728920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.728945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.728960 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.749303 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:56:17.452510746 +0000 UTC Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.768400 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:17 crc kubenswrapper[4758]: E0130 08:31:17.768606 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.831694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.831749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.831759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.831780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.831797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.934512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.934553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.934572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.934597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.934620 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.038467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.038528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.038541 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.038568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.038587 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.142452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.142517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.142535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.142564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.142585 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.224497 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.246401 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.246439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.246453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.246540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.246575 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.348697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.349158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.349352 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.349521 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.349675 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.452618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.452663 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.452676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.452696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.452711 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.556725 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.556808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.556828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.556873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.556903 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.598614 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.598794 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:22.598761506 +0000 UTC m=+147.571073087 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.599402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.599624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.599708 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.599992 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:22.599972623 +0000 UTC m=+147.572284214 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.599845 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.600436 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.600418707 +0000 UTC m=+147.572730288 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.659430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.659520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.659531 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.659550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.659567 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.700877 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.700952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701216 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701247 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701267 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701334 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.701313035 +0000 UTC m=+147.673624626 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701577 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701667 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701737 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701881 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.701859901 +0000 UTC m=+147.674171542 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.750947 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:48:51.699635549 +0000 UTC Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.761997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.762129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.762205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.762282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.762351 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.768527 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.768630 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.768677 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.768788 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.768925 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.769158 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.866101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.866478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.866574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.866671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.866764 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.969299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.969522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.969582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.969647 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.969712 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:18Z","lastTransitionTime":"2026-01-30T08:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.071920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.071966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.071979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.071996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.072011 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.174018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.174070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.174078 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.174090 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.174098 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.276067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.276164 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.276179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.276196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.276207 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.378347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.378413 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.378422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.378436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.378446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.480884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.480925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.480939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.480957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.480971 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.582803 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.582844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.582867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.582884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.582896 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.685601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.685649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.685663 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.685683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.685696 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.752694 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:45:00.588650652 +0000 UTC Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.768263 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:19 crc kubenswrapper[4758]: E0130 08:31:19.768418 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787523 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787598 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889336 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889423 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992891 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.095860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096565 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198955 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.302022 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404632 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404711 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506360 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609527 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609558 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711871 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.753100 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:13:01.636635978 +0000 UTC Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.768456 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.768536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.768577 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:20 crc kubenswrapper[4758]: E0130 08:31:20.768632 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:20 crc kubenswrapper[4758]: E0130 08:31:20.768696 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:20 crc kubenswrapper[4758]: E0130 08:31:20.768838 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813968 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916433 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.018992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019065 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019083 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019107 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120629 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223173 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325793 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427619 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529816 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631741 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631757 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631766 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.733872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734085 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734131 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.754217 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:02:30.765953748 +0000 UTC Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.768657 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:21 crc kubenswrapper[4758]: E0130 08:31:21.769228 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836645 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.938954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939050 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041700 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143577 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143604 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348604 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429375 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.442770 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447324 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.459490 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.462982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.475512 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478841 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.491796 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495305 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495377 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.508354 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.508505 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509951 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611674 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714409 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.754996 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:55:03.538813139 +0000 UTC Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.768407 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.768447 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.768528 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.768648 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.768749 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.768813 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817243 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918658 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020643 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226261 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328434 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328461 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430842 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532943 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634921 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634937 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634948 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737528 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.755269 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:07:53.63138276 +0000 UTC Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.767955 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:23 crc kubenswrapper[4758]: E0130 08:31:23.768103 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.839978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840057 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840070 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943159 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943192 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045856 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147879 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.353951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354108 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354131 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456602 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559413 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559447 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661398 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661459 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.755633 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:33:04.707023596 +0000 UTC Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763567 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.764084 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.768069 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.768111 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:24 crc kubenswrapper[4758]: E0130 08:31:24.768164 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.768068 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:24 crc kubenswrapper[4758]: E0130 08:31:24.768277 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:24 crc kubenswrapper[4758]: E0130 08:31:24.768311 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865897 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967796 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070508 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174156 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174196 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276511 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276571 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379161 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481191 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.583643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.583969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.584211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.584478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.584908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687321 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.756117 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:59:47.619297803 +0000 UTC Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.767601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:25 crc kubenswrapper[4758]: E0130 08:31:25.768027 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.779961 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789743 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789765 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.791673 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.814402 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.829635 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.839359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.851468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb8
18289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.862617 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.874285 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893726 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.913405 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.913384529 podStartE2EDuration="1m11.913384529s" podCreationTimestamp="2026-01-30 08:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.910637816 +0000 UTC m=+90.882949367" watchObservedRunningTime="2026-01-30 08:31:25.913384529 +0000 UTC m=+90.885696080" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.958392 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podStartSLOduration=70.958372002 podStartE2EDuration="1m10.958372002s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.957067063 +0000 UTC m=+90.929378614" watchObservedRunningTime="2026-01-30 08:31:25.958372002 +0000 UTC m=+90.930683563" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.982477 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-99ddw" podStartSLOduration=70.982462627 podStartE2EDuration="1m10.982462627s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.97074804 +0000 UTC m=+90.943059601" watchObservedRunningTime="2026-01-30 08:31:25.982462627 +0000 UTC m=+90.954774178" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.995758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 
08:31:25.996068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996404 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.003186 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.003168419 podStartE2EDuration="1m6.003168419s" podCreationTimestamp="2026-01-30 08:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:26.002666374 +0000 UTC m=+90.974977925" watchObservedRunningTime="2026-01-30 08:31:26.003168419 +0000 UTC m=+90.975479980" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.003740 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" podStartSLOduration=71.003732826 podStartE2EDuration="1m11.003732826s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 08:31:25.986941454 +0000 UTC m=+90.959253005" watchObservedRunningTime="2026-01-30 08:31:26.003732826 +0000 UTC m=+90.976044377" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098463 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200611 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303159 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303651 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405836 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.507884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508399 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610611 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713220 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.757055 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:25:22.909632842 +0000 UTC Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.768426 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.768465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.768465 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:26 crc kubenswrapper[4758]: E0130 08:31:26.768580 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:26 crc kubenswrapper[4758]: E0130 08:31:26.768719 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:26 crc kubenswrapper[4758]: E0130 08:31:26.768806 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.818914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.818975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.818987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.819013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.819025 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921490 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921499 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023752 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023823 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023834 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.228971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229032 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433757 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536183 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637914 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.757822 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:11:14.305386499 +0000 UTC Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.768319 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:27 crc kubenswrapper[4758]: E0130 08:31:27.768440 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.769740 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:27 crc kubenswrapper[4758]: E0130 08:31:27.769988 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.784934 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lnh2g" podStartSLOduration=72.784910919 podStartE2EDuration="1m12.784910919s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.784540068 +0000 UTC m=+92.756851629" watchObservedRunningTime="2026-01-30 08:31:27.784910919 +0000 UTC m=+92.757222500" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842277 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.869398 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.869375716 podStartE2EDuration="1m10.869375716s" podCreationTimestamp="2026-01-30 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.868568051 +0000 UTC m=+92.840879612" watchObservedRunningTime="2026-01-30 08:31:27.869375716 +0000 UTC m=+92.841687307" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.870484 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.870475669 podStartE2EDuration="20.870475669s" podCreationTimestamp="2026-01-30 08:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.841473425 +0000 UTC m=+92.813784976" watchObservedRunningTime="2026-01-30 08:31:27.870475669 +0000 UTC m=+92.842787240" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.894362 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8z796" podStartSLOduration=72.894335758 podStartE2EDuration="1m12.894335758s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.89344643 +0000 UTC m=+92.865757981" watchObservedRunningTime="2026-01-30 08:31:27.894335758 +0000 UTC m=+92.866647309" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.914602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" podStartSLOduration=72.914582265 podStartE2EDuration="1m12.914582265s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.913944926 +0000 UTC m=+92.886256517" watchObservedRunningTime="2026-01-30 08:31:27.914582265 +0000 UTC m=+92.886893826" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.930983 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.930964765 podStartE2EDuration="45.930964765s" podCreationTimestamp="2026-01-30 08:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.928429448 +0000 UTC m=+92.900741009" watchObservedRunningTime="2026-01-30 08:31:27.930964765 +0000 UTC m=+92.903276316" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945136 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945148 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048573 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152602 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256939 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360579 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464447 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568124 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568257 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672782 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.758600 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:40:15.046418752 +0000 UTC Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.768120 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.768175 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.768122 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:28 crc kubenswrapper[4758]: E0130 08:31:28.768285 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:28 crc kubenswrapper[4758]: E0130 08:31:28.768374 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:28 crc kubenswrapper[4758]: E0130 08:31:28.768498 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776154 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776307 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878540 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981887 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084985 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291744 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395295 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538528 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.641934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.641986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.642009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.642062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.642082 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745451 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.759739 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:20:04.029996206 +0000 UTC Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.769387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:29 crc kubenswrapper[4758]: E0130 08:31:29.769605 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848165 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.951948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.951996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.952013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.952043 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.952085 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054777 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054830 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157318 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260451 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260465 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260478 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364319 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467305 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467320 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570532 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672685 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.760537 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:27:02.496845005 +0000 UTC Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.767851 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.767881 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.767975 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:30 crc kubenswrapper[4758]: E0130 08:31:30.768076 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:30 crc kubenswrapper[4758]: E0130 08:31:30.768189 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:30 crc kubenswrapper[4758]: E0130 08:31:30.768248 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775120 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775172 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775184 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877739 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980357 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980386 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082594 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.186005 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288957 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392464 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495271 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.597957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.597987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.597996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.598011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.598029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700637 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.761583 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:31:47.557828449 +0000 UTC Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.768250 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:31 crc kubenswrapper[4758]: E0130 08:31:31.768391 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803834 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906486 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009151 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113216 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217628 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434647 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434699 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531755 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.607732 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9"] Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.608622 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.612237 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.613433 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.616156 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.617267 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666781 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666878 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666940 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/768774e5-5df7-470d-a77b-593638e3a4ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768774e5-5df7-470d-a77b-593638e3a4ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.667075 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768774e5-5df7-470d-a77b-593638e3a4ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.762576 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:15:34.783933446 +0000 UTC Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.762687 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767769 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768774e5-5df7-470d-a77b-593638e3a4ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc 
kubenswrapper[4758]: I0130 08:31:32.767769 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767919 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767974 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768090 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768774e5-5df7-470d-a77b-593638e3a4ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768142 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768774e5-5df7-470d-a77b-593638e3a4ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: E0130 08:31:32.768180 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768406 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767789 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: E0130 08:31:32.768638 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.769685 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:32 crc kubenswrapper[4758]: E0130 08:31:32.769882 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.770120 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768774e5-5df7-470d-a77b-593638e3a4ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.777553 4758 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.789339 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768774e5-5df7-470d-a77b-593638e3a4ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.800990 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/768774e5-5df7-470d-a77b-593638e3a4ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.930836 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:33 crc kubenswrapper[4758]: I0130 08:31:33.278176 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" event={"ID":"768774e5-5df7-470d-a77b-593638e3a4ef","Type":"ContainerStarted","Data":"ca565fbed1ef3c3da3074606a8709d22a9c1b470e012567a28f57f1e8dee4ce4"} Jan 30 08:31:33 crc kubenswrapper[4758]: I0130 08:31:33.278289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" event={"ID":"768774e5-5df7-470d-a77b-593638e3a4ef","Type":"ContainerStarted","Data":"3874c30d204eb3c13fef0f9026749f04ec416e00f6b2f287573a49aa551462c8"} Jan 30 08:31:33 crc kubenswrapper[4758]: I0130 08:31:33.768663 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:33 crc kubenswrapper[4758]: E0130 08:31:33.768964 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.079908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.080234 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.080752 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:38.08071997 +0000 UTC m=+163.053031561 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.768533 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.768710 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.768796 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.769163 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.769621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.769858 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:35 crc kubenswrapper[4758]: I0130 08:31:35.768402 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:35 crc kubenswrapper[4758]: E0130 08:31:35.769909 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:36 crc kubenswrapper[4758]: I0130 08:31:36.768172 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:36 crc kubenswrapper[4758]: I0130 08:31:36.768238 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:36 crc kubenswrapper[4758]: E0130 08:31:36.768341 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:36 crc kubenswrapper[4758]: I0130 08:31:36.768512 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:36 crc kubenswrapper[4758]: E0130 08:31:36.768685 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:36 crc kubenswrapper[4758]: E0130 08:31:36.768860 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:37 crc kubenswrapper[4758]: I0130 08:31:37.768343 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:37 crc kubenswrapper[4758]: E0130 08:31:37.768468 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:38 crc kubenswrapper[4758]: I0130 08:31:38.768083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:38 crc kubenswrapper[4758]: I0130 08:31:38.768083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:38 crc kubenswrapper[4758]: I0130 08:31:38.768252 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:38 crc kubenswrapper[4758]: E0130 08:31:38.768410 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:38 crc kubenswrapper[4758]: E0130 08:31:38.768562 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:38 crc kubenswrapper[4758]: E0130 08:31:38.768657 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:39 crc kubenswrapper[4758]: I0130 08:31:39.768414 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:39 crc kubenswrapper[4758]: E0130 08:31:39.768552 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:40 crc kubenswrapper[4758]: I0130 08:31:40.768068 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:40 crc kubenswrapper[4758]: I0130 08:31:40.768141 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:40 crc kubenswrapper[4758]: I0130 08:31:40.768237 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:40 crc kubenswrapper[4758]: E0130 08:31:40.768315 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:40 crc kubenswrapper[4758]: E0130 08:31:40.768435 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:40 crc kubenswrapper[4758]: E0130 08:31:40.768526 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:41 crc kubenswrapper[4758]: I0130 08:31:41.770338 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:41 crc kubenswrapper[4758]: E0130 08:31:41.770544 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:41 crc kubenswrapper[4758]: I0130 08:31:41.771574 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:41 crc kubenswrapper[4758]: E0130 08:31:41.772075 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:42 crc kubenswrapper[4758]: I0130 08:31:42.768447 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:42 crc kubenswrapper[4758]: I0130 08:31:42.768509 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:42 crc kubenswrapper[4758]: I0130 08:31:42.768467 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:42 crc kubenswrapper[4758]: E0130 08:31:42.768701 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:42 crc kubenswrapper[4758]: E0130 08:31:42.768805 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:42 crc kubenswrapper[4758]: E0130 08:31:42.768961 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:43 crc kubenswrapper[4758]: I0130 08:31:43.768147 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:43 crc kubenswrapper[4758]: E0130 08:31:43.768330 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:44 crc kubenswrapper[4758]: I0130 08:31:44.768296 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:44 crc kubenswrapper[4758]: E0130 08:31:44.768510 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:44 crc kubenswrapper[4758]: I0130 08:31:44.768539 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:44 crc kubenswrapper[4758]: E0130 08:31:44.768821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:44 crc kubenswrapper[4758]: I0130 08:31:44.769179 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:44 crc kubenswrapper[4758]: E0130 08:31:44.769257 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:45 crc kubenswrapper[4758]: I0130 08:31:45.768358 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:45 crc kubenswrapper[4758]: E0130 08:31:45.769755 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:46 crc kubenswrapper[4758]: I0130 08:31:46.768309 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:46 crc kubenswrapper[4758]: I0130 08:31:46.768360 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:46 crc kubenswrapper[4758]: E0130 08:31:46.768423 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:46 crc kubenswrapper[4758]: I0130 08:31:46.768314 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:46 crc kubenswrapper[4758]: E0130 08:31:46.768594 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:46 crc kubenswrapper[4758]: E0130 08:31:46.768663 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:47 crc kubenswrapper[4758]: I0130 08:31:47.768066 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:47 crc kubenswrapper[4758]: E0130 08:31:47.768348 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:48 crc kubenswrapper[4758]: I0130 08:31:48.768090 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:48 crc kubenswrapper[4758]: I0130 08:31:48.768121 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:48 crc kubenswrapper[4758]: E0130 08:31:48.768195 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:48 crc kubenswrapper[4758]: I0130 08:31:48.768100 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:48 crc kubenswrapper[4758]: E0130 08:31:48.768251 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:48 crc kubenswrapper[4758]: E0130 08:31:48.768277 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:49 crc kubenswrapper[4758]: I0130 08:31:49.768543 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:49 crc kubenswrapper[4758]: E0130 08:31:49.769268 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.342955 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.343979 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/0.log" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.344093 4758 generic.go:334] "Generic (PLEG): container finished" podID="fac75e9c-fc94-4c83-8613-bce0f4744079" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" exitCode=1 Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.344137 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerDied","Data":"e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1"} Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.344187 4758 scope.go:117] "RemoveContainer" containerID="0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.345490 4758 
scope.go:117] "RemoveContainer" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.346100 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-99ddw_openshift-multus(fac75e9c-fc94-4c83-8613-bce0f4744079)\"" pod="openshift-multus/multus-99ddw" podUID="fac75e9c-fc94-4c83-8613-bce0f4744079" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.376194 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" podStartSLOduration=95.376173744 podStartE2EDuration="1m35.376173744s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:33.31441611 +0000 UTC m=+98.286727751" watchObservedRunningTime="2026-01-30 08:31:50.376173744 +0000 UTC m=+115.348485315" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.768122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.768122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.768656 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.768814 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.768120 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.769183 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:51 crc kubenswrapper[4758]: I0130 08:31:51.351998 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:31:51 crc kubenswrapper[4758]: I0130 08:31:51.768298 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:51 crc kubenswrapper[4758]: E0130 08:31:51.768471 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:52 crc kubenswrapper[4758]: I0130 08:31:52.767788 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:52 crc kubenswrapper[4758]: I0130 08:31:52.767971 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:52 crc kubenswrapper[4758]: I0130 08:31:52.768178 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:52 crc kubenswrapper[4758]: E0130 08:31:52.768169 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:52 crc kubenswrapper[4758]: E0130 08:31:52.768343 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:52 crc kubenswrapper[4758]: E0130 08:31:52.768654 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:53 crc kubenswrapper[4758]: I0130 08:31:53.768270 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:53 crc kubenswrapper[4758]: E0130 08:31:53.768738 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:53 crc kubenswrapper[4758]: I0130 08:31:53.769310 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:53 crc kubenswrapper[4758]: E0130 08:31:53.769435 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:54 crc kubenswrapper[4758]: I0130 08:31:54.768061 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:54 crc kubenswrapper[4758]: I0130 08:31:54.768195 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:54 crc kubenswrapper[4758]: E0130 08:31:54.768237 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:54 crc kubenswrapper[4758]: I0130 08:31:54.768317 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:54 crc kubenswrapper[4758]: E0130 08:31:54.768548 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:54 crc kubenswrapper[4758]: E0130 08:31:54.768738 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:55 crc kubenswrapper[4758]: E0130 08:31:55.716303 4758 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 08:31:55 crc kubenswrapper[4758]: I0130 08:31:55.767755 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:55 crc kubenswrapper[4758]: E0130 08:31:55.769463 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:55 crc kubenswrapper[4758]: E0130 08:31:55.868194 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:31:56 crc kubenswrapper[4758]: I0130 08:31:56.767773 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:56 crc kubenswrapper[4758]: I0130 08:31:56.767923 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:56 crc kubenswrapper[4758]: E0130 08:31:56.767955 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:56 crc kubenswrapper[4758]: I0130 08:31:56.767783 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:56 crc kubenswrapper[4758]: E0130 08:31:56.768217 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:56 crc kubenswrapper[4758]: E0130 08:31:56.768305 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:57 crc kubenswrapper[4758]: I0130 08:31:57.768789 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:57 crc kubenswrapper[4758]: E0130 08:31:57.769022 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:58 crc kubenswrapper[4758]: I0130 08:31:58.767982 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:58 crc kubenswrapper[4758]: I0130 08:31:58.768090 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:58 crc kubenswrapper[4758]: I0130 08:31:58.768601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:58 crc kubenswrapper[4758]: E0130 08:31:58.768764 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:58 crc kubenswrapper[4758]: E0130 08:31:58.768929 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:58 crc kubenswrapper[4758]: E0130 08:31:58.769204 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:59 crc kubenswrapper[4758]: I0130 08:31:59.768324 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:59 crc kubenswrapper[4758]: E0130 08:31:59.769231 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:00 crc kubenswrapper[4758]: I0130 08:32:00.767821 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.768006 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:00 crc kubenswrapper[4758]: I0130 08:32:00.768332 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:00 crc kubenswrapper[4758]: I0130 08:32:00.768389 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.768549 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.768692 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.869704 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:32:01 crc kubenswrapper[4758]: I0130 08:32:01.768217 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:01 crc kubenswrapper[4758]: E0130 08:32:01.768439 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:02 crc kubenswrapper[4758]: I0130 08:32:02.767645 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:02 crc kubenswrapper[4758]: I0130 08:32:02.767778 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:02 crc kubenswrapper[4758]: E0130 08:32:02.767851 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:02 crc kubenswrapper[4758]: I0130 08:32:02.767950 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:02 crc kubenswrapper[4758]: E0130 08:32:02.768123 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:02 crc kubenswrapper[4758]: E0130 08:32:02.768191 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:03 crc kubenswrapper[4758]: I0130 08:32:03.768287 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:03 crc kubenswrapper[4758]: E0130 08:32:03.768517 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.767778 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.767931 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:04 crc kubenswrapper[4758]: E0130 08:32:04.768075 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.768158 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:04 crc kubenswrapper[4758]: E0130 08:32:04.768257 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:04 crc kubenswrapper[4758]: E0130 08:32:04.768494 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.768951 4758 scope.go:117] "RemoveContainer" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" Jan 30 08:32:05 crc kubenswrapper[4758]: I0130 08:32:05.412244 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:32:05 crc kubenswrapper[4758]: I0130 08:32:05.412892 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19"} Jan 30 08:32:05 crc kubenswrapper[4758]: I0130 08:32:05.768570 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:05 crc kubenswrapper[4758]: E0130 08:32:05.769972 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:05 crc kubenswrapper[4758]: E0130 08:32:05.871016 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:32:06 crc kubenswrapper[4758]: I0130 08:32:06.767522 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:06 crc kubenswrapper[4758]: E0130 08:32:06.767663 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:06 crc kubenswrapper[4758]: I0130 08:32:06.767522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:06 crc kubenswrapper[4758]: E0130 08:32:06.767756 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:06 crc kubenswrapper[4758]: I0130 08:32:06.767524 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:06 crc kubenswrapper[4758]: E0130 08:32:06.767855 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:07 crc kubenswrapper[4758]: I0130 08:32:07.767917 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:07 crc kubenswrapper[4758]: E0130 08:32:07.768132 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.767802 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.767832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.767903 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:08 crc kubenswrapper[4758]: E0130 08:32:08.768028 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:08 crc kubenswrapper[4758]: E0130 08:32:08.768170 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:08 crc kubenswrapper[4758]: E0130 08:32:08.768575 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.768897 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.425841 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.428531 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353"} Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.428912 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 
08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.455664 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podStartSLOduration=114.455644914 podStartE2EDuration="1m54.455644914s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:09.45265563 +0000 UTC m=+134.424967211" watchObservedRunningTime="2026-01-30 08:32:09.455644914 +0000 UTC m=+134.427956475" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.725703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gj6b4"] Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.725868 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:09 crc kubenswrapper[4758]: E0130 08:32:09.726071 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:10 crc kubenswrapper[4758]: I0130 08:32:10.768279 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.768603 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:10 crc kubenswrapper[4758]: I0130 08:32:10.768298 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:10 crc kubenswrapper[4758]: I0130 08:32:10.768335 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.768757 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.768810 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.872655 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:32:11 crc kubenswrapper[4758]: I0130 08:32:11.768390 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:11 crc kubenswrapper[4758]: E0130 08:32:11.768542 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:12 crc kubenswrapper[4758]: I0130 08:32:12.767697 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:12 crc kubenswrapper[4758]: I0130 08:32:12.767809 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:12 crc kubenswrapper[4758]: I0130 08:32:12.767728 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:12 crc kubenswrapper[4758]: E0130 08:32:12.767894 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:12 crc kubenswrapper[4758]: E0130 08:32:12.768123 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:12 crc kubenswrapper[4758]: E0130 08:32:12.768203 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:13 crc kubenswrapper[4758]: I0130 08:32:13.767770 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:13 crc kubenswrapper[4758]: E0130 08:32:13.767974 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:14 crc kubenswrapper[4758]: I0130 08:32:14.768479 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:14 crc kubenswrapper[4758]: E0130 08:32:14.768624 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:14 crc kubenswrapper[4758]: I0130 08:32:14.768703 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:14 crc kubenswrapper[4758]: E0130 08:32:14.768913 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:14 crc kubenswrapper[4758]: I0130 08:32:14.769274 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:14 crc kubenswrapper[4758]: E0130 08:32:14.769422 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:15 crc kubenswrapper[4758]: I0130 08:32:15.767622 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:15 crc kubenswrapper[4758]: E0130 08:32:15.769580 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.767993 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.768169 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.768384 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.772415 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.772690 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.780875 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.781137 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 08:32:17 crc kubenswrapper[4758]: I0130 08:32:17.767723 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:17 crc kubenswrapper[4758]: I0130 08:32:17.769948 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 08:32:17 crc kubenswrapper[4758]: I0130 08:32:17.770006 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 08:32:19 crc kubenswrapper[4758]: I0130 08:32:19.241586 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.387299 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 
08:32:22.387358 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670430 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: E0130 08:32:22.670599 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:34:24.670568542 +0000 UTC m=+269.642880123 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670704 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.671888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.678599 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.772095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.772192 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.780414 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.782553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.804853 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.827273 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.838968 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.477575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bdcb25005d1c2bb73f54b7d40b3b8ae6fb360057c8ed105fd97cf1f6bbc587f9"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.477916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"932143a6e8b2a22127122c1b00ee8c2202282b86f76c6623cb309ae5f2a8cf55"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.479408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99434f6bf1ff80dce949575876654f2681af5bc06a1b726dc50de372aac982fb"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.479491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"df9f6c7b0c4f7528fd08a093c08256243e0150460509444ccbe86889d742ca6f"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.479764 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.480506 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1d79978374fca6fe6a57a8596223fcc9c30b1e4e525a1c5be28be07f4c3db244"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.480536 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a1f32a3da5a84270f92e09e66774efe0dade64b3ef0562aa0ce864e43a588c03"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.960235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.007935 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrqb6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.008822 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.014492 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.014577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.014595 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.018515 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.018874 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.023716 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.023791 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.023812 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.024362 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.024402 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.025021 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 
30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.026165 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.046994 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.048452 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.048955 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.049111 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.049248 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.050334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.060228 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.060917 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q592c"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.061696 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.064693 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.065077 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.066626 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.066866 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljqw"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.067319 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.068548 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.069218 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.069383 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.070240 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.072584 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.073227 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.073974 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.078160 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.078912 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.079396 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.079439 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.079935 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.080057 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.082232 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zkrv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.082822 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.082841 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.083498 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.084861 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zp6d8"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085245 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085812 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085880 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-etcd-client\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085918 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5551b0-f51c-43c9-bf60-e191132339fe-serving-cert\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-audit\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085954 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-image-import-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 
08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085991 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-audit-dir\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-images\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086057 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-config\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086096 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-serving-cert\") pod \"apiserver-76f77b778f-hrqb6\" (UID: 
\"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086114 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-config\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086152 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086188 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt49h\" (UniqueName: \"kubernetes.io/projected/5b5551b0-f51c-43c9-bf60-e191132339fe-kube-api-access-lt49h\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086228 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086246 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-node-pullsecrets\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086263 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2qt\" (UniqueName: \"kubernetes.io/projected/814885ca-d12b-49a3-a788-4648517a1c23-kube-api-access-cw2qt\") pod \"apiserver-76f77b778f-hrqb6\" (UID: 
\"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086285 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrg7\" (UniqueName: \"kubernetes.io/projected/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-kube-api-access-mxrg7\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086325 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-encryption-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091288 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091422 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091487 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091711 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.092164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094361 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094470 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094581 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094676 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094769 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094896 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094943 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095016 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095111 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095554 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095642 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095720 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095805 4758 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095925 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.096060 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.096191 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.096317 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097474 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097803 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097884 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097959 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098057 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098154 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098257 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098359 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098445 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098642 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098729 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098940 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098980 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099030 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099097 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099142 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099208 4758 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099265 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.123362 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7trvn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.159878 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.162789 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.163195 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rmc6n"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.163490 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.163738 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.164757 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.173458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.179182 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8tqh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.179826 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.180222 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.181085 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.182368 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.184635 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.185574 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187496 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187646 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187749 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-audit-dir\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-images\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188213 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188248 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188381 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-config\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188546 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-serving-cert\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188705 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188735 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188869 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-config\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188896 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189655 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f452c53b-893b-4060-b573-595e98576792-serving-cert\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189978 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt49h\" (UniqueName: \"kubernetes.io/projected/5b5551b0-f51c-43c9-bf60-e191132339fe-kube-api-access-lt49h\") pod 
\"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190002 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-node-pullsecrets\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2qt\" (UniqueName: \"kubernetes.io/projected/814885ca-d12b-49a3-a788-4648517a1c23-kube-api-access-cw2qt\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190466 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190496 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190638 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190777 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190810 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfb2\" (UniqueName: \"kubernetes.io/projected/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-kube-api-access-wvfb2\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrg7\" (UniqueName: \"kubernetes.io/projected/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-kube-api-access-mxrg7\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190994 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191127 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191151 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2db7\" (UniqueName: 
\"kubernetes.io/projected/e376d872-d6db-4f3b-b9f0-9fff22f7546d-kube-api-access-q2db7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191279 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191312 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191364 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-config\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191470 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-encryption-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 
08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191537 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e376d872-d6db-4f3b-b9f0-9fff22f7546d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188449 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-audit-dir\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191590 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191740 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191766 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: 
\"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191924 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-images\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191980 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-etcd-client\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192297 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192322 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192344 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192366 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e376d872-d6db-4f3b-b9f0-9fff22f7546d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192392 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc 
kubenswrapper[4758]: I0130 08:32:24.192435 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192459 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgp4\" (UniqueName: \"kubernetes.io/projected/f452c53b-893b-4060-b573-595e98576792-kube-api-access-hxgp4\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192489 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5551b0-f51c-43c9-bf60-e191132339fe-serving-cert\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f452c53b-893b-4060-b573-595e98576792-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192543 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-audit\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192912 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-image-import-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.194031 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-image-import-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.195337 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.195864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-audit\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188759 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.197069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.197152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-node-pullsecrets\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.197987 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.198029 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.198640 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrqb6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.198710 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.199561 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.200060 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.200424 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.200474 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-config\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.201751 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189621 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189656 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189686 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189712 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189780 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189804 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189833 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190214 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190245 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190271 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 
08:32:24.190294 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190322 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190346 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190371 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190396 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190428 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190455 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190481 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190509 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190547 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.206393 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190579 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190620 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190654 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190724 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190748 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191212 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191212 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191263 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.207200 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191312 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: 
I0130 08:32:24.191331 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191472 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.207466 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191484 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.236569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5551b0-f51c-43c9-bf60-e191132339fe-serving-cert\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.236992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-serving-cert\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.238275 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod 
\"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.245853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.246788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.247315 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-etcd-client\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.247833 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.249673 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.250661 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.252203 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.257287 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-encryption-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.257841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.258458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191610 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191655 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191717 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191775 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191820 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192413 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.193126 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.195250 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.270512 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.273001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.276441 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.293324 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.296062 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.310947 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312568 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 
08:32:24.312609 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f452c53b-893b-4060-b573-595e98576792-serving-cert\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312710 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312742 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: 
\"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312810 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312849 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvfb2\" (UniqueName: \"kubernetes.io/projected/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-kube-api-access-wvfb2\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312893 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312928 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2db7\" (UniqueName: \"kubernetes.io/projected/e376d872-d6db-4f3b-b9f0-9fff22f7546d-kube-api-access-q2db7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312966 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e376d872-d6db-4f3b-b9f0-9fff22f7546d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313001 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313110 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"console-f9d7485db-rxgh6\" (UID: 
\"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313184 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313227 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e376d872-d6db-4f3b-b9f0-9fff22f7546d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313262 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgp4\" (UniqueName: \"kubernetes.io/projected/f452c53b-893b-4060-b573-595e98576792-kube-api-access-hxgp4\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/f452c53b-893b-4060-b573-595e98576792-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313492 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313609 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314020 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314128 4758 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314715 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f452c53b-893b-4060-b573-595e98576792-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.315157 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317054 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317834 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317923 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317957 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.318225 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.318354 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.318627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.321148 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 
08:32:24.321893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.322614 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e376d872-d6db-4f3b-b9f0-9fff22f7546d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.323943 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325106 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325352 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325569 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.326455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.326466 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328855 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328860 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.329052 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325703 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.330120 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.330380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.330428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.331623 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.333952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.334451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.334627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.334951 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.335777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.335791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.336138 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.336888 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.337591 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.338239 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.339026 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.339739 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.340343 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hlm86"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.340823 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.341026 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.341219 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e376d872-d6db-4f3b-b9f0-9fff22f7546d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.341681 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.342490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.345101 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.346204 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q592c"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.346722 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.347628 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.347806 4758 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.351546 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.353325 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ggszh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.353664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f452c53b-893b-4060-b573-595e98576792-serving-cert\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.354403 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljqw"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.354534 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.359718 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7trvn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.364064 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.364779 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.366136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.369948 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.371203 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.372684 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.375222 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.376822 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.378783 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h2zc"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.381659 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.381799 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.382212 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kdljh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.382942 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.385177 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.387129 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zkrv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.388758 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8tqh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.390647 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.390919 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.392893 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.394808 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.397389 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.399337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.401009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.402541 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.404147 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h2zc"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.412752 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.425117 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.427648 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zp6d8"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.429057 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.431424 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.432356 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.434369 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.436061 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.438424 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hlm86"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.438462 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.439391 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.440684 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.441842 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.442877 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-sgrqx"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.444779 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.445282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ggszh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.446849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sgrqx"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.451379 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.471123 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.490935 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.510737 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.531636 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.572117 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.590274 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.609460 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2qt\" (UniqueName: \"kubernetes.io/projected/814885ca-d12b-49a3-a788-4648517a1c23-kube-api-access-cw2qt\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.623772 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.631728 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrg7\" (UniqueName: \"kubernetes.io/projected/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-kube-api-access-mxrg7\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.633733 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.652442 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.663366 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.671805 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.694599 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.718736 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.733292 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.734980 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.758948 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.762811 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.771237 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.791087 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.811933 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.834283 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.852626 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.871901 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.889819 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrqb6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.893593 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: W0130 08:32:24.901509 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814885ca_d12b_49a3_a788_4648517a1c23.slice/crio-cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551 WatchSource:0}: Error finding container 
cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551: Status 404 returned error can't find the container with id cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551 Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.911828 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.952552 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt49h\" (UniqueName: \"kubernetes.io/projected/5b5551b0-f51c-43c9-bf60-e191132339fe-kube-api-access-lt49h\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.991892 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.011438 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024596 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-audit-policies\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-trusted-ca\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024708 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024722 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024747 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngv7\" (UniqueName: 
\"kubernetes.io/projected/becac525-d7ec-48b6-9f52-3b7ca1606e50-kube-api-access-kngv7\") pod \"migrator-59844c95c7-57j8p\" (UID: \"becac525-d7ec-48b6-9f52-3b7ca1606e50\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024765 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024778 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024794 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxtx\" (UniqueName: \"kubernetes.io/projected/d03a1e8b-8151-4fb9-8a25-56e567566244-kube-api-access-lmxtx\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94b7\" (UniqueName: \"kubernetes.io/projected/359e47a9-5633-496e-9522-d7c522c674bf-kube-api-access-s94b7\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc 
kubenswrapper[4758]: I0130 08:32:25.024843 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ggx\" (UniqueName: \"kubernetes.io/projected/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-kube-api-access-h4ggx\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024874 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-etcd-client\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024915 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024930 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755faa64-0182-4450-bd27-cb87446008d8-serving-cert\") pod 
\"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-default-certificate\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d03a1e8b-8151-4fb9-8a25-56e567566244-metrics-tls\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024981 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-stats-auth\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf779\" (UniqueName: \"kubernetes.io/projected/b0313d23-ff69-4957-ad3b-d6adc246aad5-kube-api-access-kf779\") pod \"downloads-7954f5f757-zp6d8\" (UID: \"b0313d23-ff69-4957-ad3b-d6adc246aad5\") " pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025062 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/359e47a9-5633-496e-9522-d7c522c674bf-audit-dir\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025094 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-service-ca-bundle\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025128 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025158 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-config\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc 
kubenswrapper[4758]: I0130 08:32:25.025181 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-serving-cert\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025198 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92t4\" (UniqueName: \"kubernetes.io/projected/755faa64-0182-4450-bd27-cb87446008d8-kube-api-access-q92t4\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025212 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-encryption-config\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-metrics-certs\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.026495 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:25.526482102 +0000 UTC m=+150.498793653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.032315 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.052310 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.073607 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.087104 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.091391 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.113581 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127301 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.127503 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.627474471 +0000 UTC m=+150.599786022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-csi-data-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127613 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2950f4ab-791b-4190-9455-14e34e95f22d-signing-cabundle\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127698 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7f2\" (UniqueName: 
\"kubernetes.io/projected/25e6f6dd-4791-48ca-a614-928eb2fd6886-kube-api-access-rc7f2\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127730 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-encryption-config\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127751 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127788 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-profile-collector-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127815 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dv4\" (UniqueName: \"kubernetes.io/projected/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-kube-api-access-z6dv4\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-config\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128294 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-audit-policies\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bstw8\" (UniqueName: \"kubernetes.io/projected/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-kube-api-access-bstw8\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128415 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/102f0f42-f8c6-4e98-9e96-1659a0a62c50-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128638 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128677 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-tmpfs\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128703 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-serving-cert\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128730 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128756 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbbw\" (UniqueName: \"kubernetes.io/projected/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-kube-api-access-msbbw\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128898 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kngv7\" (UniqueName: 
\"kubernetes.io/projected/becac525-d7ec-48b6-9f52-3b7ca1606e50-kube-api-access-kngv7\") pod \"migrator-59844c95c7-57j8p\" (UID: \"becac525-d7ec-48b6-9f52-3b7ca1606e50\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129040 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-audit-policies\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129354 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129523 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7b7\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-kube-api-access-rc7b7\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d98b4c-0d9c-4a9a-af05-1def738f8293-config\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.129659 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-node-bootstrap-token\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129721 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129745 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129788 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"image-registry-697d97f7c8-fd88w\" 
(UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94b7\" (UniqueName: \"kubernetes.io/projected/359e47a9-5633-496e-9522-d7c522c674bf-kube-api-access-s94b7\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129929 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj26n\" (UniqueName: \"kubernetes.io/projected/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-kube-api-access-jj26n\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129972 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129998 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130042 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130084 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130111 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130134 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: 
\"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.131360 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133052 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133136 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8kdk\" (UniqueName: \"kubernetes.io/projected/6e70d9dc-9e4c-45a3-b8d3-046067b91297-kube-api-access-m8kdk\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133185 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-metrics-tls\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133293 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755faa64-0182-4450-bd27-cb87446008d8-serving-cert\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133374 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d03a1e8b-8151-4fb9-8a25-56e567566244-metrics-tls\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.133957 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.633936864 +0000 UTC m=+150.606248415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.134583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-encryption-config\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.135643 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.135723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqv4r\" (UniqueName: \"kubernetes.io/projected/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-kube-api-access-wqv4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.138881 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755faa64-0182-4450-bd27-cb87446008d8-serving-cert\") pod 
\"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139636 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-apiservice-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d03a1e8b-8151-4fb9-8a25-56e567566244-metrics-tls\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139768 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139822 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-webhook-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139852 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-auth-proxy-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139883 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/102f0f42-f8c6-4e98-9e96-1659a0a62c50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7866g\" (UniqueName: \"kubernetes.io/projected/67466d94-68c1-4700-aec7-f2dd533b2fd6-kube-api-access-7866g\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140026 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-plugins-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140072 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-proxy-tls\") pod 
\"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140109 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/359e47a9-5633-496e-9522-d7c522c674bf-audit-dir\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140186 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/359e47a9-5633-496e-9522-d7c522c674bf-audit-dir\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140199 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-certs\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140220 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whl5r\" (UniqueName: 
\"kubernetes.io/projected/125e1dd5-1556-4334-86d4-3c45fa9e833d-kube-api-access-whl5r\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140366 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-config\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.140856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-serving-cert\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140888 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-mountpoint-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140913 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25e6f6dd-4791-48ca-a614-928eb2fd6886-config\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140941 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92t4\" (UniqueName: \"kubernetes.io/projected/755faa64-0182-4450-bd27-cb87446008d8-kube-api-access-q92t4\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140971 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-metrics-certs\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " 
pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140995 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125e1dd5-1556-4334-86d4-3c45fa9e833d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141020 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-srv-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141038 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141111 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a06bcc27-063e-4acc-942d-78594f88fd2c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/125e1dd5-1556-4334-86d4-3c45fa9e833d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141164 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlpm\" (UniqueName: \"kubernetes.io/projected/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-kube-api-access-jdlpm\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtqg\" (UniqueName: \"kubernetes.io/projected/99426938-7a55-4e2a-8ded-c683fe91d54d-kube-api-access-bhtqg\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-proxy-tls\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141281 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-socket-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " 
pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-registration-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141333 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-trusted-ca\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141355 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb1701d-491e-479d-a12b-5af9e40e2be5-config-volume\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141379 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141404 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj54p\" (UniqueName: \"kubernetes.io/projected/2950f4ab-791b-4190-9455-14e34e95f22d-kube-api-access-mj54p\") pod \"service-ca-9c57cc56f-hlm86\" (UID: 
\"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141434 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljvs\" (UniqueName: \"kubernetes.io/projected/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-kube-api-access-wljvs\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141485 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxtx\" (UniqueName: \"kubernetes.io/projected/d03a1e8b-8151-4fb9-8a25-56e567566244-kube-api-access-lmxtx\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141493 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-config\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-client\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnr9\" (UniqueName: \"kubernetes.io/projected/2fb1701d-491e-479d-a12b-5af9e40e2be5-kube-api-access-9gnr9\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141581 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-service-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ggx\" (UniqueName: \"kubernetes.io/projected/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-kube-api-access-h4ggx\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-etcd-client\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141692 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6rk\" (UniqueName: \"kubernetes.io/projected/a06bcc27-063e-4acc-942d-78594f88fd2c-kube-api-access-5n6rk\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141718 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmn8l\" (UniqueName: \"kubernetes.io/projected/88956ba5-c91f-435b-94fa-d639c87311f3-kube-api-access-nmn8l\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141741 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-config\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141766 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-default-certificate\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141788 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvghl\" (UniqueName: \"kubernetes.io/projected/edeaef2c-0b5f-4448-a890-764774c8ff03-kube-api-access-qvghl\") pod \"package-server-manager-789f6589d5-hgtxv\" 
(UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141864 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2950f4ab-791b-4190-9455-14e34e95f22d-signing-key\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141886 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d98b4c-0d9c-4a9a-af05-1def738f8293-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141908 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/edeaef2c-0b5f-4448-a890-764774c8ff03-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: 
I0130 08:32:25.141930 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbdrv\" (UniqueName: \"kubernetes.io/projected/24c14c8a-2e57-452b-b70b-646c1e2bac06-kube-api-access-cbdrv\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-stats-auth\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141994 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142021 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25e6f6dd-4791-48ca-a614-928eb2fd6886-serving-cert\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf779\" (UniqueName: \"kubernetes.io/projected/b0313d23-ff69-4957-ad3b-d6adc246aad5-kube-api-access-kf779\") pod \"downloads-7954f5f757-zp6d8\" (UID: 
\"b0313d23-ff69-4957-ad3b-d6adc246aad5\") " pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142098 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pbl\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-kube-api-access-c5pbl\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142152 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-images\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142332 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 
crc kubenswrapper[4758]: I0130 08:32:25.142361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-service-ca-bundle\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142410 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142433 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2fb1701d-491e-479d-a12b-5af9e40e2be5-metrics-tls\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-trusted-ca\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d98b4c-0d9c-4a9a-af05-1def738f8293-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142542 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67466d94-68c1-4700-aec7-f2dd533b2fd6-machine-approver-tls\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.143720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-trusted-ca\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.144294 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.147581 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-serving-cert\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.148646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-metrics-certs\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.150517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.150894 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-stats-auth\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.153548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-service-ca-bundle\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.207881 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-default-certificate\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.208245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-etcd-client\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.212135 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.212613 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.212827 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.220362 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.224791 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.232885 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.246845 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.247587 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.747550429 +0000 UTC m=+150.719861980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a06bcc27-063e-4acc-942d-78594f88fd2c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250600 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtqg\" (UniqueName: \"kubernetes.io/projected/99426938-7a55-4e2a-8ded-c683fe91d54d-kube-api-access-bhtqg\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-proxy-tls\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125e1dd5-1556-4334-86d4-3c45fa9e833d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250660 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdlpm\" (UniqueName: \"kubernetes.io/projected/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-kube-api-access-jdlpm\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250682 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-socket-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-registration-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.250727 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb1701d-491e-479d-a12b-5af9e40e2be5-config-volume\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250750 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj54p\" (UniqueName: \"kubernetes.io/projected/2950f4ab-791b-4190-9455-14e34e95f22d-kube-api-access-mj54p\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250770 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljvs\" (UniqueName: \"kubernetes.io/projected/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-kube-api-access-wljvs\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-client\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250840 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnr9\" (UniqueName: \"kubernetes.io/projected/2fb1701d-491e-479d-a12b-5af9e40e2be5-kube-api-access-9gnr9\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.250856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-service-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250896 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6rk\" (UniqueName: \"kubernetes.io/projected/a06bcc27-063e-4acc-942d-78594f88fd2c-kube-api-access-5n6rk\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmn8l\" (UniqueName: \"kubernetes.io/projected/88956ba5-c91f-435b-94fa-d639c87311f3-kube-api-access-nmn8l\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250930 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-config\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250949 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvghl\" (UniqueName: \"kubernetes.io/projected/edeaef2c-0b5f-4448-a890-764774c8ff03-kube-api-access-qvghl\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: 
\"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250976 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250993 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/edeaef2c-0b5f-4448-a890-764774c8ff03-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251012 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbdrv\" (UniqueName: \"kubernetes.io/projected/24c14c8a-2e57-452b-b70b-646c1e2bac06-kube-api-access-cbdrv\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2950f4ab-791b-4190-9455-14e34e95f22d-signing-key\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251051 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0d98b4c-0d9c-4a9a-af05-1def738f8293-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251138 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251160 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25e6f6dd-4791-48ca-a614-928eb2fd6886-serving-cert\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251183 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pbl\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-kube-api-access-c5pbl\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251218 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-images\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251242 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251260 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251275 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2fb1701d-491e-479d-a12b-5af9e40e2be5-metrics-tls\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251290 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-trusted-ca\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: 
\"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251305 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d98b4c-0d9c-4a9a-af05-1def738f8293-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67466d94-68c1-4700-aec7-f2dd533b2fd6-machine-approver-tls\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251360 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251379 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251394 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-csi-data-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251409 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2950f4ab-791b-4190-9455-14e34e95f22d-signing-cabundle\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251427 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7f2\" (UniqueName: \"kubernetes.io/projected/25e6f6dd-4791-48ca-a614-928eb2fd6886-kube-api-access-rc7f2\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251447 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-profile-collector-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 
08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251481 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dv4\" (UniqueName: \"kubernetes.io/projected/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-kube-api-access-z6dv4\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251497 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-config\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251516 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251533 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstw8\" (UniqueName: \"kubernetes.io/projected/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-kube-api-access-bstw8\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-mcc-auth-proxy-config\") 
pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251569 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/102f0f42-f8c6-4e98-9e96-1659a0a62c50-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-tmpfs\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251621 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-serving-cert\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-msbbw\" (UniqueName: \"kubernetes.io/projected/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-kube-api-access-msbbw\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7b7\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-kube-api-access-rc7b7\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d98b4c-0d9c-4a9a-af05-1def738f8293-config\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251737 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-node-bootstrap-token\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " 
pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251778 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj26n\" (UniqueName: \"kubernetes.io/projected/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-kube-api-access-jj26n\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251862 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251880 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251904 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251949 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.251972 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8kdk\" (UniqueName: \"kubernetes.io/projected/6e70d9dc-9e4c-45a3-b8d3-046067b91297-kube-api-access-m8kdk\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251999 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252050 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-metrics-tls\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqv4r\" (UniqueName: \"kubernetes.io/projected/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-kube-api-access-wqv4r\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252118 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-apiservice-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252167 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-webhook-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252188 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-auth-proxy-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252213 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/102f0f42-f8c6-4e98-9e96-1659a0a62c50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7866g\" (UniqueName: \"kubernetes.io/projected/67466d94-68c1-4700-aec7-f2dd533b2fd6-kube-api-access-7866g\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-plugins-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-proxy-tls\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252312 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-certs\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252365 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whl5r\" (UniqueName: \"kubernetes.io/projected/125e1dd5-1556-4334-86d4-3c45fa9e833d-kube-api-access-whl5r\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252406 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-mountpoint-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252428 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25e6f6dd-4791-48ca-a614-928eb2fd6886-config\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125e1dd5-1556-4334-86d4-3c45fa9e833d-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252486 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-srv-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-socket-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-registration-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.253683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256289 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125e1dd5-1556-4334-86d4-3c45fa9e833d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256722 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q592c"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256758 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256785 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a06bcc27-063e-4acc-942d-78594f88fd2c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.257092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-csi-data-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.257537 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-config\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258202 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: 
\"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258645 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-proxy-tls\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-plugins-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/102f0f42-f8c6-4e98-9e96-1659a0a62c50-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260217 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-tmpfs\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260366 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-images\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260420 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-mountpoint-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.262579 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.762538811 +0000 UTC m=+150.734850522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.264398 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-service-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.264963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.267524 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.267650 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-metrics-tls\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.268802 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.269164 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-client\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.269190 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d98b4c-0d9c-4a9a-af05-1def738f8293-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.271879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.272810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-proxy-tls\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.274267 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-serving-cert\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.275613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/102f0f42-f8c6-4e98-9e96-1659a0a62c50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.281893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.282945 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/125e1dd5-1556-4334-86d4-3c45fa9e833d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.286777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d98b4c-0d9c-4a9a-af05-1def738f8293-config\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.287301 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.313209 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.321647 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.329702 4758 request.go:700] Waited for 1.000433144s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.352105 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.353530 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.354015 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.853989589 +0000 UTC m=+150.826301140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.355757 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvfb2\" (UniqueName: \"kubernetes.io/projected/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-kube-api-access-wvfb2\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.375106 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.395793 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.397817 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-srv-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.415991 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: 
\"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.416436 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.421819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-profile-collector-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.422244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.432464 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.441598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.453585 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.455757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.456555 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.956542437 +0000 UTC m=+150.928853988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.464623 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.468729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/edeaef2c-0b5f-4448-a890-764774c8ff03-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.493471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" event={"ID":"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972","Type":"ContainerStarted","Data":"e491d6716764c77f2cf3d0b73751d42e1951d7ac67b09a1074a34fd126b99ca7"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.493562 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" event={"ID":"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972","Type":"ContainerStarted","Data":"9eb218aee65f6ce0d0843d2141ba8677df46d2e8f76d79c266edaaae80aa218f"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.497899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgp4\" (UniqueName: \"kubernetes.io/projected/f452c53b-893b-4060-b573-595e98576792-kube-api-access-hxgp4\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.510471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" 
event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerStarted","Data":"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.510711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerStarted","Data":"c8e9069e7ec2c171424e1bd93a4b8f9855a2158a2e3b30930d28fb757cb65678"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.511154 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.511987 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2db7\" (UniqueName: \"kubernetes.io/projected/e376d872-d6db-4f3b-b9f0-9fff22f7546d-kube-api-access-q2db7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.512129 4758 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n7zk7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.512582 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 
08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.512482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.515316 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerStarted","Data":"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.515377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerStarted","Data":"1a4096fd26f7cd35b668bd3b74bb0a13df67390a721f7d3ce8bb74a60052f47a"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.516798 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518147 4758 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9kt48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518188 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518827 4758 generic.go:334] "Generic (PLEG): container finished" podID="814885ca-d12b-49a3-a788-4648517a1c23" 
containerID="06bed04177d001b476d605fe09f5c6d6e476ffb8f3710f4efae153f1def3cebd" exitCode=0 Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518859 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerDied","Data":"06bed04177d001b476d605fe09f5c6d6e476ffb8f3710f4efae153f1def3cebd"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerStarted","Data":"cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.521981 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-apiservice-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.522018 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-webhook-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.531191 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.541781 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljqw"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.543344 
4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.552453 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 08:32:25 crc kubenswrapper[4758]: W0130 08:32:25.554863 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5551b0_f51c_43c9_bf60_e191132339fe.slice/crio-85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24 WatchSource:0}: Error finding container 85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24: Status 404 returned error can't find the container with id 85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24 Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.559473 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.563880 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.564683 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.064665739 +0000 UTC m=+151.036977290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.566212 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.571333 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.583722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25e6f6dd-4791-48ca-a614-928eb2fd6886-serving-cert\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.587967 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.591828 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.603940 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25e6f6dd-4791-48ca-a614-928eb2fd6886-config\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.615386 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.641229 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.656069 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.656709 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.666409 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.667741 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.167722162 +0000 UTC m=+151.140033713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.674305 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.680349 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2950f4ab-791b-4190-9455-14e34e95f22d-signing-cabundle\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.698680 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.712356 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.738327 4758 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.743342 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2950f4ab-791b-4190-9455-14e34e95f22d-signing-key\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.762439 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.768576 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.769523 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.269505156 +0000 UTC m=+151.241816697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.772473 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.798726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.801861 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.805617 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-config\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.816220 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.834488 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.851875 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.870875 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.871395 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.371372452 +0000 UTC m=+151.343684093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.873895 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.891665 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.912664 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.920243 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67466d94-68c1-4700-aec7-f2dd533b2fd6-machine-approver-tls\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.926000 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-auth-proxy-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.932005 4758 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.936910 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.942908 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.959653 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.960306 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.972834 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.973704 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.473666491 +0000 UTC m=+151.445978042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: W0130 08:32:25.998443 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode376d872_d6db_4f3b_b9f0_9fff22f7546d.slice/crio-66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a WatchSource:0}: Error finding container 66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a: Status 404 returned error can't find the container with id 66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:25.998617 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.000910 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.002368 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.011859 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.013665 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb1701d-491e-479d-a12b-5af9e40e2be5-config-volume\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " 
pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.013842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2fb1701d-491e-479d-a12b-5af9e40e2be5-metrics-tls\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.021377 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.031879 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.040464 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.052582 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.072161 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.074634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.075313 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.575298579 +0000 UTC m=+151.547610130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.079841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.092928 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.115500 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.131939 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 
08:32:26.135709 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.152092 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 08:32:26 crc kubenswrapper[4758]: W0130 08:32:26.166442 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf190322_1e43_4ae4_ac74_78702c913801.slice/crio-773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4 WatchSource:0}: Error finding container 773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4: Status 404 returned error can't find the container with id 773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4 Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.171271 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.175827 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.176003 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.675977968 +0000 UTC m=+151.648289509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.176346 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.176825 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.676812853 +0000 UTC m=+151.649124404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.185331 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-certs\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.194219 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.212276 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.224178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-node-bootstrap-token\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.232151 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.251503 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.259713 4758 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.259771 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert podName:99426938-7a55-4e2a-8ded-c683fe91d54d nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.759753534 +0000 UTC m=+151.732065085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert") pod "ingress-canary-sgrqx" (UID: "99426938-7a55-4e2a-8ded-c683fe91d54d") : failed to sync secret cache: timed out waiting for the condition Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.271970 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.278035 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.278305 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.778278837 +0000 UTC m=+151.750590388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.278584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.279024 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.77901663 +0000 UTC m=+151.751328181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.291064 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.350226 4758 request.go:700] Waited for 1.221017686s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.350703 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.368230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kngv7\" (UniqueName: \"kubernetes.io/projected/becac525-d7ec-48b6-9f52-3b7ca1606e50-kube-api-access-kngv7\") pod \"migrator-59844c95c7-57j8p\" (UID: \"becac525-d7ec-48b6-9f52-3b7ca1606e50\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.380192 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.380366 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.880341829 +0000 UTC m=+151.852653380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.380885 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.381221 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.881207356 +0000 UTC m=+151.853518907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.395683 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s94b7\" (UniqueName: \"kubernetes.io/projected/359e47a9-5633-496e-9522-d7c522c674bf-kube-api-access-s94b7\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.407548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.414663 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.429095 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92t4\" (UniqueName: \"kubernetes.io/projected/755faa64-0182-4450-bd27-cb87446008d8-kube-api-access-q92t4\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.445899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ggx\" (UniqueName: \"kubernetes.io/projected/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-kube-api-access-h4ggx\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.451587 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.474534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxtx\" (UniqueName: \"kubernetes.io/projected/d03a1e8b-8151-4fb9-8a25-56e567566244-kube-api-access-lmxtx\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.480077 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.482337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.482470 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.982449872 +0000 UTC m=+151.954761423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.482841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.483155 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.983139104 +0000 UTC m=+151.955450655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.486864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf779\" (UniqueName: \"kubernetes.io/projected/b0313d23-ff69-4957-ad3b-d6adc246aad5-kube-api-access-kf779\") pod \"downloads-7954f5f757-zp6d8\" (UID: \"b0313d23-ff69-4957-ad3b-d6adc246aad5\") " pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.496465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.505967 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.509246 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdlpm\" (UniqueName: \"kubernetes.io/projected/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-kube-api-access-jdlpm\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.517203 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.572596 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.574745 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj54p\" (UniqueName: \"kubernetes.io/projected/2950f4ab-791b-4190-9455-14e34e95f22d-kube-api-access-mj54p\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.593643 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtqg\" (UniqueName: \"kubernetes.io/projected/99426938-7a55-4e2a-8ded-c683fe91d54d-kube-api-access-bhtqg\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.598240 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnr9\" (UniqueName: \"kubernetes.io/projected/2fb1701d-491e-479d-a12b-5af9e40e2be5-kube-api-access-9gnr9\") pod \"dns-default-ggszh\" (UID: 
\"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.609508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" event={"ID":"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972","Type":"ContainerStarted","Data":"bb9b8e054de931ab03f7eb90b37e3eb1db1214d52ef2bd3df6ab4f36b3e60abf"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.621788 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.622318 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.122300733 +0000 UTC m=+152.094612284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.622519 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.623143 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6rk\" (UniqueName: \"kubernetes.io/projected/a06bcc27-063e-4acc-942d-78594f88fd2c-kube-api-access-5n6rk\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.624006 4758 generic.go:334] "Generic (PLEG): container finished" podID="f452c53b-893b-4060-b573-595e98576792" containerID="4b57da8d1eee57235d82410e50918444bac3c4f78786e34aea5a3019fbed97fe" exitCode=0 Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.624096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" event={"ID":"f452c53b-893b-4060-b573-595e98576792","Type":"ContainerDied","Data":"4b57da8d1eee57235d82410e50918444bac3c4f78786e34aea5a3019fbed97fe"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.624120 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" event={"ID":"f452c53b-893b-4060-b573-595e98576792","Type":"ContainerStarted","Data":"0768556d82c063eab914e0e7af28e9e4bb30fbf6448ca9fd896d5c86ea40dc81"} Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.626619 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.126565668 +0000 UTC m=+152.098877219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.629405 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljvs\" (UniqueName: \"kubernetes.io/projected/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-kube-api-access-wljvs\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.630277 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmn8l\" (UniqueName: \"kubernetes.io/projected/88956ba5-c91f-435b-94fa-d639c87311f3-kube-api-access-nmn8l\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 
08:32:26.630330 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerStarted","Data":"041ec71157587093ffdee96d31c8608d19810d54113c1648ad1f342f54ee09a2"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.630380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerStarted","Data":"10a9862bfa8fb48ed262dee70ea4cf4783505353e609701fe0d44b8e976559ed"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.639556 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.639933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" event={"ID":"5b5551b0-f51c-43c9-bf60-e191132339fe","Type":"ContainerStarted","Data":"c20273ebf8c91d2708a07938b196b9ddd89248bb4177298b767d1e8e9a251cc6"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.639958 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" event={"ID":"5b5551b0-f51c-43c9-bf60-e191132339fe","Type":"ContainerStarted","Data":"85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.642914 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" event={"ID":"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d","Type":"ContainerStarted","Data":"89e67cc962d4bf89057ae2a579f90a35a4e85329807e53e47f04365cd59a99df"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.642944 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" event={"ID":"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d","Type":"ContainerStarted","Data":"64f1d84a89d35df3dd5e23a02ab891ad0a03756256ffd41ced5be5722cc1bf34"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.642979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" event={"ID":"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d","Type":"ContainerStarted","Data":"a4dc4d39d49eae5226959e90aa2ddd5a4dacec5f7d79a905e763a710c4671136"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.650560 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerStarted","Data":"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.650598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerStarted","Data":"773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.654881 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" event={"ID":"e376d872-d6db-4f3b-b9f0-9fff22f7546d","Type":"ContainerStarted","Data":"80003f63be1d0c524f614428431ba7d52b02e3fe28fb71a2fa2b237957f2f821"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.654912 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" event={"ID":"e376d872-d6db-4f3b-b9f0-9fff22f7546d","Type":"ContainerStarted","Data":"66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a"} Jan 30 08:32:26 crc 
kubenswrapper[4758]: I0130 08:32:26.660112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvghl\" (UniqueName: \"kubernetes.io/projected/edeaef2c-0b5f-4448-a890-764774c8ff03-kube-api-access-qvghl\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.665815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerStarted","Data":"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.665848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.665858 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerStarted","Data":"53b5888c06d75276ec1154090765c765987de449f501b8cfa950546c58f4dcb5"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.667504 4758 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9kt48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.667539 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 
10.217.0.6:8443: connect: connection refused" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.686110 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.700570 4758 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ssdl5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.701439 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d98b4c-0d9c-4a9a-af05-1def738f8293-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.700648 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.718050 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbdrv\" (UniqueName: \"kubernetes.io/projected/24c14c8a-2e57-452b-b70b-646c1e2bac06-kube-api-access-cbdrv\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.718419 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.728185 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.731157 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.747811 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.247762942 +0000 UTC m=+152.220074493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.748322 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.750802 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.759907 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.259866822 +0000 UTC m=+152.232178553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.770308 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.770747 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.781523 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dv4\" (UniqueName: \"kubernetes.io/projected/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-kube-api-access-z6dv4\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.790781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.809303 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstw8\" (UniqueName: 
\"kubernetes.io/projected/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-kube-api-access-bstw8\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.828091 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7866g\" (UniqueName: \"kubernetes.io/projected/67466d94-68c1-4700-aec7-f2dd533b2fd6-kube-api-access-7866g\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.828970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7f2\" (UniqueName: \"kubernetes.io/projected/25e6f6dd-4791-48ca-a614-928eb2fd6886-kube-api-access-rc7f2\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.844790 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.866675 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.871958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.872220 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.877511 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.377478634 +0000 UTC m=+152.349790185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.877806 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.878820 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.893821 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.898950 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.905695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.912429 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.915492 
4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.945531 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.957941 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pbl\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-kube-api-access-c5pbl\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.962040 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whl5r\" (UniqueName: \"kubernetes.io/projected/125e1dd5-1556-4334-86d4-3c45fa9e833d-kube-api-access-whl5r\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.962251 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.970139 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.973304 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.973596 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.473582568 +0000 UTC m=+152.445894119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.984222 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zkrv"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.988115 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqv4r\" (UniqueName: \"kubernetes.io/projected/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-kube-api-access-wqv4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.995762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj26n\" (UniqueName: \"kubernetes.io/projected/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-kube-api-access-jj26n\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.996409 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.999384 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8kdk\" (UniqueName: \"kubernetes.io/projected/6e70d9dc-9e4c-45a3-b8d3-046067b91297-kube-api-access-m8kdk\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.004633 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.032319 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.045211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.045489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbbw\" (UniqueName: \"kubernetes.io/projected/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-kube-api-access-msbbw\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.064927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7b7\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-kube-api-access-rc7b7\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.074545 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.074988 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.574970619 +0000 UTC m=+152.547282170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.075190 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.105464 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"] Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.112921 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88956ba5_c91f_435b_94fa_d639c87311f3.slice/crio-99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895 WatchSource:0}: Error finding container 99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895: Status 404 returned error can't find the container with id 99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.154268 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.170426 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.176914 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.177232 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.677220867 +0000 UTC m=+152.649532418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.188339 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.204311 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.220151 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.220784 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.228988 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.257358 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.279451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.279889 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.779874617 +0000 UTC m=+152.752186168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.388460 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.389052 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.889040873 +0000 UTC m=+152.861352424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.493581 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.494124 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.99411026 +0000 UTC m=+152.966421811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.494177 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.494517 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.994510062 +0000 UTC m=+152.966821613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.513910 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7trvn"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.513953 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zp6d8"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.595984 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.596334 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.096319406 +0000 UTC m=+153.068630957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.698567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.698840 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.198829862 +0000 UTC m=+153.171141413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.700041 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ggszh"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.718525 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8tqh"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.777396 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rmc6n" event={"ID":"6df11515-6ad6-40b0-bd21-fc92e2eaeca6","Type":"ContainerStarted","Data":"027cee00e588ff30f684eca9f769310f1c5539ec18cd0e62c55af4a496308162"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.777444 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rmc6n" event={"ID":"6df11515-6ad6-40b0-bd21-fc92e2eaeca6","Type":"ContainerStarted","Data":"d2838d70732b53ccb10c2f8543a7fba52b2a3ac5b91e8a08fa9ad1f30a101f82"} Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.785986 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67466d94_68c1_4700_aec7_f2dd533b2fd6.slice/crio-089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46 WatchSource:0}: Error finding container 089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46: Status 404 returned error can't find the container with id 089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46 Jan 30 08:32:27 
crc kubenswrapper[4758]: I0130 08:32:27.799278 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.799695 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.299682796 +0000 UTC m=+153.271994347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.830739 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0313d23_ff69_4957_ad3b_d6adc246aad5.slice/crio-2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782 WatchSource:0}: Error finding container 2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782: Status 404 returned error can't find the container with id 2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.900189 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.901371 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.401359475 +0000 UTC m=+153.373671016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.911328 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podStartSLOduration=132.911313919 podStartE2EDuration="2m12.911313919s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:27.910437761 +0000 UTC m=+152.882749312" watchObservedRunningTime="2026-01-30 08:32:27.911313919 +0000 UTC m=+152.883625470" Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.947268 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e5a4d53_1458_40fc_9171_b7cac79f3b8a.slice/crio-67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06 WatchSource:0}: Error 
finding container 67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06: Status 404 returned error can't find the container with id 67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.964845 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" podStartSLOduration=132.964831013 podStartE2EDuration="2m12.964831013s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:27.964238965 +0000 UTC m=+152.936550516" watchObservedRunningTime="2026-01-30 08:32:27.964831013 +0000 UTC m=+152.937142564" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975303 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975351 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" event={"ID":"755faa64-0182-4450-bd27-cb87446008d8","Type":"ContainerStarted","Data":"bab4ef183c1bfe066d408955a7bfd2317902f8d27cc1d2c37290f64b834a1c8d"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975382 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" event={"ID":"359e47a9-5633-496e-9522-d7c522c674bf","Type":"ContainerStarted","Data":"e085a72720025a5a5a5147e7d06c317a2722fa31e536a60b9df5b703f60e101a"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975398 4758 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" event={"ID":"f452c53b-893b-4060-b573-595e98576792","Type":"ContainerStarted","Data":"9124a9f0dcfb742c66a7275135142d0fa2cc67d375d53d7b9b9183ad624d6aa7"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kdljh" event={"ID":"88956ba5-c91f-435b-94fa-d639c87311f3","Type":"ContainerStarted","Data":"99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" event={"ID":"becac525-d7ec-48b6-9f52-3b7ca1606e50","Type":"ContainerStarted","Data":"34e55fb62fbaa6037b0cd2380a9181c8df85e59163d0082d143ec6c76f014828"} Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.003115 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.004592 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.504575774 +0000 UTC m=+153.476887325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.050991 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hlm86"] Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.106032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.119543 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.619522622 +0000 UTC m=+153.591834173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.129172 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h2zc"] Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.153119 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" podStartSLOduration=133.153092708 podStartE2EDuration="2m13.153092708s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.151434696 +0000 UTC m=+153.123746247" watchObservedRunningTime="2026-01-30 08:32:28.153092708 +0000 UTC m=+153.125404259" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.212226 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.212569 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:28.712552459 +0000 UTC m=+153.684864010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.316355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.316644 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.816633414 +0000 UTC m=+153.788944955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.423599 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.423910 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.92389531 +0000 UTC m=+153.896206861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.519817 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.520006 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.520034 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.524735 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.524988 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.024978832 +0000 UTC m=+153.997290383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.625767 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.625942 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.125915688 +0000 UTC m=+154.098227239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.626347 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.626657 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.126650371 +0000 UTC m=+154.098961922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.679947 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" podStartSLOduration=133.679931697 podStartE2EDuration="2m13.679931697s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.678552234 +0000 UTC m=+153.650863775" watchObservedRunningTime="2026-01-30 08:32:28.679931697 +0000 UTC m=+153.652243248" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.731757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.732149 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.232133611 +0000 UTC m=+154.204445162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.843327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.843598 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.343587928 +0000 UTC m=+154.315899479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.864293 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"] Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.880201 4758 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ssdl5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.880256 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.897849 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rxgh6" podStartSLOduration=133.897830575 podStartE2EDuration="2m13.897830575s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.896769302 +0000 UTC m=+153.869080863" 
watchObservedRunningTime="2026-01-30 08:32:28.897830575 +0000 UTC m=+153.870142126" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.900512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" event={"ID":"2950f4ab-791b-4190-9455-14e34e95f22d","Type":"ContainerStarted","Data":"65eeb97e2b5d0d3c9a9003de3b0afba3d33526702e9a55731b9f89804f91d5cd"} Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.949925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.949985 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.449962516 +0000 UTC m=+154.422274067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.953319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.953780 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.453756675 +0000 UTC m=+154.426068226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.962733 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" event={"ID":"67466d94-68c1-4700-aec7-f2dd533b2fd6","Type":"ContainerStarted","Data":"089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.007044 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" podStartSLOduration=134.007023322 podStartE2EDuration="2m14.007023322s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.994037293 +0000 UTC m=+153.966348854" watchObservedRunningTime="2026-01-30 08:32:29.007023322 +0000 UTC m=+153.979334873" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.009286 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.011437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" event={"ID":"8e5a4d53-1458-40fc-9171-b7cac79f3b8a","Type":"ContainerStarted","Data":"67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 
08:32:29.035999 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podStartSLOduration=134.035981313 podStartE2EDuration="2m14.035981313s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.035817307 +0000 UTC m=+154.008128868" watchObservedRunningTime="2026-01-30 08:32:29.035981313 +0000 UTC m=+154.008292864" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.062846 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.063190 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.563164438 +0000 UTC m=+154.535475989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.072526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.072992 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.572978717 +0000 UTC m=+154.545290268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: W0130 08:32:29.073394 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod830a2e4b_3e0d_409e_9a0b_bf503e81d5e0.slice/crio-2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae WatchSource:0}: Error finding container 2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae: Status 404 returned error can't find the container with id 2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.073881 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" event={"ID":"755faa64-0182-4450-bd27-cb87446008d8","Type":"ContainerStarted","Data":"77ae92fedd29f08775c6b25a7d81b674b9255c4e65de2a78e169541c80f21afe"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.074911 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.077639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zp6d8" event={"ID":"b0313d23-ff69-4957-ad3b-d6adc246aad5","Type":"ContainerStarted","Data":"2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.078365 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.099983 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.100027 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.100100 4758 patch_prober.go:28] interesting pod/console-operator-58897d9998-8zkrv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.100176 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" podUID="755faa64-0182-4450-bd27-cb87446008d8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.170227 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.175570 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.176287 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.676260187 +0000 UTC m=+154.648571788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.187383 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" podStartSLOduration=134.187369877 podStartE2EDuration="2m14.187369877s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.186362605 +0000 UTC m=+154.158674156" watchObservedRunningTime="2026-01-30 08:32:29.187369877 +0000 UTC m=+154.159681428" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.187979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" event={"ID":"becac525-d7ec-48b6-9f52-3b7ca1606e50","Type":"ContainerStarted","Data":"52039d01ec41c3fcd8336f14b78c18cb1473517434254dc585712312c0cdd85b"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.219057 4758 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kdljh" event={"ID":"88956ba5-c91f-435b-94fa-d639c87311f3","Type":"ContainerStarted","Data":"7c10964b7c51d2fa7c5b8a770655e27606a8f6bc504fd63be02688a95bb450b2"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.221897 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"9a82ed84260fe27d9b5c28dfc49599735d9ee49b89213c146cfb4c0e717cde1b"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.296744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.298444 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.798432452 +0000 UTC m=+154.770744003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.312016 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.319195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ggszh" event={"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"b960a62c68b51fea3331be373da3df05c882404601ee0fe2885cdfb86e0fffcf"} Jan 30 08:32:29 crc kubenswrapper[4758]: W0130 08:32:29.388129 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fc75b96_45f4_4639_ab9b_d95a9e3ef03c.slice/crio-6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583 WatchSource:0}: Error finding container 6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583: Status 404 returned error can't find the container with id 6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583 Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.392941 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.401549 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.402186 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.902169596 +0000 UTC m=+154.874481157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.423118 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" event={"ID":"d03a1e8b-8151-4fb9-8a25-56e567566244","Type":"ContainerStarted","Data":"5369612da68c4f4da60ef852fd60082f5798de176202520b18035a432650b161"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.440707 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" podStartSLOduration=134.440682899 podStartE2EDuration="2m14.440682899s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.387962419 +0000 UTC m=+154.360273970" watchObservedRunningTime="2026-01-30 08:32:29.440682899 +0000 UTC m=+154.412994450" Jan 30 08:32:29 crc 
kubenswrapper[4758]: I0130 08:32:29.460209 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.479490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.510712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.514027 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.014013956 +0000 UTC m=+154.986325507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.545415 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:29 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:29 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:29 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.545463 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.615895 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.616354 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:30.116338127 +0000 UTC m=+155.088649678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.633207 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.633250 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.686567 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kdljh" podStartSLOduration=5.686553197 podStartE2EDuration="5.686553197s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.686123163 +0000 UTC m=+154.658434724" watchObservedRunningTime="2026-01-30 08:32:29.686553197 +0000 UTC m=+154.658864748" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.691947 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sgrqx"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.716907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.718928 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.218917745 +0000 UTC m=+155.191229296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.817511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.817952 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.317937951 +0000 UTC m=+155.290249502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.886017 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rmc6n" podStartSLOduration=134.886002384 podStartE2EDuration="2m14.886002384s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.76259307 +0000 UTC m=+154.734904621" watchObservedRunningTime="2026-01-30 08:32:29.886002384 +0000 UTC m=+154.858313955" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.889329 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.907110 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.922008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.922319 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.422307856 +0000 UTC m=+155.394619407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.922423 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57"] Jan 30 08:32:29 crc kubenswrapper[4758]: W0130 08:32:29.951612 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9313ed67_0218_4d32_adf7_710ba67de622.slice/crio-b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a WatchSource:0}: Error finding container b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a: Status 404 returned error can't find the container with id b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.973182 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"] Jan 30 08:32:30 crc kubenswrapper[4758]: W0130 08:32:30.007285 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79889e0a_985e_4bcc_bdfe_2160f05a5bfe.slice/crio-8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df WatchSource:0}: Error finding 
container 8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df: Status 404 returned error can't find the container with id 8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.026726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.027134 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.527119704 +0000 UTC m=+155.499431245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.031831 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" podStartSLOduration=135.031817171 podStartE2EDuration="2m15.031817171s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.029986774 +0000 UTC m=+155.002298325" watchObservedRunningTime="2026-01-30 
08:32:30.031817171 +0000 UTC m=+155.004128722" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.150375 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.150698 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.650686293 +0000 UTC m=+155.622997844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.175759 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.235973 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.239878 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.239973 4758 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.251026 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.251307 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.751287699 +0000 UTC m=+155.723599250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.252903 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.253831 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.272053 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"] Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 
08:32:30.322139 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zp6d8" podStartSLOduration=135.322122578 podStartE2EDuration="2m15.322122578s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.320093405 +0000 UTC m=+155.292404966" watchObservedRunningTime="2026-01-30 08:32:30.322122578 +0000 UTC m=+155.294434129" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.359902 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.360246 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.860234647 +0000 UTC m=+155.832546198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: W0130 08:32:30.365765 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e70d9dc_9e4c_45a3_b8d3_046067b91297.slice/crio-f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc WatchSource:0}: Error finding container f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc: Status 404 returned error can't find the container with id f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.414647 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" podStartSLOduration=135.414627169 podStartE2EDuration="2m15.414627169s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.411596884 +0000 UTC m=+155.383908445" watchObservedRunningTime="2026-01-30 08:32:30.414627169 +0000 UTC m=+155.386938720" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.460558 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" event={"ID":"33e0d6c9-e5b8-478c-80f0-ccab7c303a93","Type":"ContainerStarted","Data":"effa74be220146754f2c2e16c4e4f6dfc5cc70b318cba09cb2110b4367512d45"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.461175 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" event={"ID":"33e0d6c9-e5b8-478c-80f0-ccab7c303a93","Type":"ContainerStarted","Data":"252a35670206bec95d60195729d035684c6c8df81f48cc65dca52e863e8046c3"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.461956 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.464805 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.464922 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.964905122 +0000 UTC m=+155.937216663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.465270 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.465460 4758 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5zgww container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.466149 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" podUID="33e0d6c9-e5b8-478c-80f0-ccab7c303a93" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.467291 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:30.967281016 +0000 UTC m=+155.939592567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.472913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" event={"ID":"d03a1e8b-8151-4fb9-8a25-56e567566244","Type":"ContainerStarted","Data":"a7a8d3d43d0f856c1f30a0258b234c1a8a4f26c427e7e8572383e79154da54bf"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.486929 4758 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hrqb6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]log ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]etcd ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/max-in-flight-filter ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 08:32:30 crc kubenswrapper[4758]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 08:32:30 crc kubenswrapper[4758]: 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/openshift.io-startinformers ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 08:32:30 crc kubenswrapper[4758]: livez check failed Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.486974 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" podUID="814885ca-d12b-49a3-a788-4648517a1c23" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.507775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" event={"ID":"e6e6799e-f1ee-4eee-a1f6-e69a09d888af","Type":"ContainerStarted","Data":"58c8f78fa7843499a864cc836820fe2d2f5ab67c5729acfa3cb79814b0043da2"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.514181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zp6d8" event={"ID":"b0313d23-ff69-4957-ad3b-d6adc246aad5","Type":"ContainerStarted","Data":"c1f552c2b93e9dfdfc6f667efdd223eb36ab22432f177677dd9d6206b07dd723"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.515096 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 
08:32:30.515128 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.535014 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:30 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:30 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:30 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.535071 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.545299 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" event={"ID":"becac525-d7ec-48b6-9f52-3b7ca1606e50","Type":"ContainerStarted","Data":"58f5148a773e1bc4a8bd30edf666d5c87aa78d88431bd0faeeacaec7c8358280"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.560059 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ggszh" event={"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"30c99a223e21f21e5df2ed1ea4b42c8ef4fddc223a105b29b7195b5c4302218f"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.567617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.567896 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.067871142 +0000 UTC m=+156.040182693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.568004 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.568233 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.068224633 +0000 UTC m=+156.040536184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.585420 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" event={"ID":"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d","Type":"ContainerStarted","Data":"170de2f1ee8573ed577993a52512dda919e22b4cb9983026b86fbb94a390fe29"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.591489 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sgrqx" event={"ID":"99426938-7a55-4e2a-8ded-c683fe91d54d","Type":"ContainerStarted","Data":"cbb4c0871940303068cc4f60338e0c2880bcebcc879d1152607fdb20b06b3330"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.633695 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" podStartSLOduration=135.633677913 podStartE2EDuration="2m15.633677913s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.578349121 +0000 UTC m=+155.550660662" watchObservedRunningTime="2026-01-30 08:32:30.633677913 +0000 UTC m=+155.605989464" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.634454 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" podStartSLOduration=135.634444537 
podStartE2EDuration="2m15.634444537s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.626582039 +0000 UTC m=+155.598893590" watchObservedRunningTime="2026-01-30 08:32:30.634444537 +0000 UTC m=+155.606756088" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.637961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" event={"ID":"125e1dd5-1556-4334-86d4-3c45fa9e833d","Type":"ContainerStarted","Data":"5cb440fadcf919f6d9e6f6fde00a6a783c12aa736ba0426159f4f58bf9f5e110"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.662718 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" event={"ID":"edeaef2c-0b5f-4448-a890-764774c8ff03","Type":"ContainerStarted","Data":"b7f69df2279f6cc4b85befdef187b0e2b509848da0e3183d7fb2f1b2489614aa"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.669766 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.671152 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.171112391 +0000 UTC m=+156.143423952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.689161 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-sgrqx" podStartSLOduration=6.689142488 podStartE2EDuration="6.689142488s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.68316993 +0000 UTC m=+155.655481481" watchObservedRunningTime="2026-01-30 08:32:30.689142488 +0000 UTC m=+155.661454039" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.709134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" event={"ID":"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c","Type":"ContainerStarted","Data":"ffb191afec5e60cd855337ae357f1de80f68393da4a5b9975604b7f0718bb8ff"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.710527 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.710635 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" event={"ID":"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c","Type":"ContainerStarted","Data":"6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.743701 4758 
patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mn2qr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.743964 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" podUID="7fc75b96-45f4-4639-ab9b-d95a9e3ef03c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.744264 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" event={"ID":"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6","Type":"ContainerStarted","Data":"842c29b8c2a6c9ba3918b428b3bbadd1c21cabb34d92622368e828cc29436219"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.753989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" event={"ID":"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426","Type":"ContainerStarted","Data":"c994373a7f6a806a97afefe9b99eb54de8568f3c633d97ac26c7a9f4dd767d8a"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.754244 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" event={"ID":"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426","Type":"ContainerStarted","Data":"ccad1cf51c4539b41486b84adc37adb3b716e219daf9df2bb5867e3eb8d2394d"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.772991 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.774830 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.274818724 +0000 UTC m=+156.247130265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.775313 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" podStartSLOduration=135.775294089 podStartE2EDuration="2m15.775294089s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.766860154 +0000 UTC m=+155.739171705" watchObservedRunningTime="2026-01-30 08:32:30.775294089 +0000 UTC m=+155.747605640" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.782911 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" event={"ID":"2950f4ab-791b-4190-9455-14e34e95f22d","Type":"ContainerStarted","Data":"a2b3795b0982b7ccd4158d3964622ced201c6d345378e7a4d6d90b9046af05fb"} Jan 30 
08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.791226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" event={"ID":"6e70d9dc-9e4c-45a3-b8d3-046067b91297","Type":"ContainerStarted","Data":"f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.841740 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" podStartSLOduration=135.84172473 podStartE2EDuration="2m15.84172473s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.825570752 +0000 UTC m=+155.797882303" watchObservedRunningTime="2026-01-30 08:32:30.84172473 +0000 UTC m=+155.814036281" Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.878427 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.879026 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.379007493 +0000 UTC m=+156.351319044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.921989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" event={"ID":"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0","Type":"ContainerStarted","Data":"2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.940825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" event={"ID":"a0d98b4c-0d9c-4a9a-af05-1def738f8293","Type":"ContainerStarted","Data":"57570cc3af6b3d405efbc7e2a23883cd7eda165d89dbaac8c596511a65483a2b"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.942861 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" event={"ID":"67466d94-68c1-4700-aec7-f2dd533b2fd6","Type":"ContainerStarted","Data":"d5b7569e27f5ba49ad012572cbbd873cabcd83a8b61675d946ee477f6f86ca74"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.943698 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerStarted","Data":"b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a"} Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.980213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.981910 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.481895841 +0000 UTC m=+156.454207402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.983676 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" podStartSLOduration=135.983663866 podStartE2EDuration="2m15.983663866s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.912699664 +0000 UTC m=+155.885011215" watchObservedRunningTime="2026-01-30 08:32:30.983663866 +0000 UTC m=+155.955975417" Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.005529 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" 
event={"ID":"102f0f42-f8c6-4e98-9e96-1659a0a62c50","Type":"ContainerStarted","Data":"40b4e95db369555595f1d5a15ac2210050253e33dfdb6de4c72251cb78b25d65"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.034455 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" event={"ID":"8e5a4d53-1458-40fc-9171-b7cac79f3b8a","Type":"ContainerStarted","Data":"1da427560a656aeb1dfdd7493b10ca62097f3619d9f72f044254b16635972ca4"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.076400 4758 generic.go:334] "Generic (PLEG): container finished" podID="359e47a9-5633-496e-9522-d7c522c674bf" containerID="ad4eccad36bcdea74c074cd6aae977a69767b0d379d8c00133cc5aee261fad95" exitCode=0 Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.077578 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" event={"ID":"359e47a9-5633-496e-9522-d7c522c674bf","Type":"ContainerDied","Data":"ad4eccad36bcdea74c074cd6aae977a69767b0d379d8c00133cc5aee261fad95"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.081653 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" podStartSLOduration=136.081524547 podStartE2EDuration="2m16.081524547s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.991657689 +0000 UTC m=+155.963969260" watchObservedRunningTime="2026-01-30 08:32:31.081524547 +0000 UTC m=+156.053836098" Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.082614 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" podStartSLOduration=136.08260766 podStartE2EDuration="2m16.08260766s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:31.078567794 +0000 UTC m=+156.050879355" watchObservedRunningTime="2026-01-30 08:32:31.08260766 +0000 UTC m=+156.054919211" Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.082936 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.084067 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.584030466 +0000 UTC m=+156.556342017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.154527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" event={"ID":"a06bcc27-063e-4acc-942d-78594f88fd2c","Type":"ContainerStarted","Data":"06020561e5deb502ae5c7cf36ab050546475bd4c7c499a3d34bc9b22ec452b2f"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.165699 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" event={"ID":"3934d9aa-0054-4ed3-a2e8-a57dc60dad77","Type":"ContainerStarted","Data":"4c5f19166eb64ee3da4d2e45ebea96ee6f292f6ea164dfa9965df9b0c9d79d2d"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.181827 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" event={"ID":"25e6f6dd-4791-48ca-a614-928eb2fd6886","Type":"ContainerStarted","Data":"b7901fea84f678d8e3dba057af8b1d3d85c65dfbabfd6582ec3e254faa0116c8"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.189636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.199503 4758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.699484548 +0000 UTC m=+156.671796099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.216402 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" event={"ID":"79889e0a-985e-4bcc-bdfe-2160f05a5bfe","Type":"ContainerStarted","Data":"8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.232133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerStarted","Data":"4118893c53a7f8a706c32c2a7e6e6021db4c895afd2c8cfa46828333ae413f1b"} Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.290860 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.291014 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.790988909 +0000 UTC m=+156.763300460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.291492 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.292520 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.792508246 +0000 UTC m=+156.764819797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.393082 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.393470 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.893454073 +0000 UTC m=+156.865765614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.495113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.495444 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.995433663 +0000 UTC m=+156.967745214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.523238 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:31 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:31 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:31 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.523275 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.574577 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.596380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.597959 4758 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.097943858 +0000 UTC m=+157.070255409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.698372 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.698776 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.198761671 +0000 UTC m=+157.171073222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.799892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.800676 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.300654538 +0000 UTC m=+157.272966089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.907678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.907953 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.407942614 +0000 UTC m=+157.380254165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.008689 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.008928 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.50886558 +0000 UTC m=+157.481177131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.009133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.009467 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.509455199 +0000 UTC m=+157.481766750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.110808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.111328 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.611307144 +0000 UTC m=+157.583618695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.212100 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.212397 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.712386235 +0000 UTC m=+157.684697786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.232852 4758 patch_prober.go:28] interesting pod/console-operator-58897d9998-8zkrv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.232966 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" podUID="755faa64-0182-4450-bd27-cb87446008d8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.242614 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" event={"ID":"359e47a9-5633-496e-9522-d7c522c674bf","Type":"ContainerStarted","Data":"6aba5664cb09159759cd5d8841f99ea17c721b3611a880f871532b6c6ebe3b5e"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.248093 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" event={"ID":"125e1dd5-1556-4334-86d4-3c45fa9e833d","Type":"ContainerStarted","Data":"7a8b363ad5ba1b0c008d5e6cad6193282a304f4608b2f2a252bb6bbf22e0c05f"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 
08:32:32.249797 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" event={"ID":"e6e6799e-f1ee-4eee-a1f6-e69a09d888af","Type":"ContainerStarted","Data":"5a5a369ae01206b2d34799be316b979b0dd43d291661e70240ebd17ae9b074c6"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.253413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerStarted","Data":"00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.253729 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255428 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" event={"ID":"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6","Type":"ContainerStarted","Data":"0453f5c082f2805fe108f737e7128303c3c9f6220d9d6351cc049efe3024c7c4"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255486 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" event={"ID":"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6","Type":"ContainerStarted","Data":"cc75227496b8c85356982be1fc89439666a80a5129668763b18a227a1b00a83d"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255611 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7mbqg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255671 4758 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.262251 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" event={"ID":"a06bcc27-063e-4acc-942d-78594f88fd2c","Type":"ContainerStarted","Data":"bbd2dbeed79fc8e831ae3bfcb2b13e43f2d3dae0048722fd165a3e1601f13b53"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.262497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" event={"ID":"a06bcc27-063e-4acc-942d-78594f88fd2c","Type":"ContainerStarted","Data":"7ed63e3c4561753b2244e5f9b388437827771e38d4343b2d7250ccab1dd0d25e"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.275539 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" podStartSLOduration=137.275506181 podStartE2EDuration="2m17.275506181s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.27418956 +0000 UTC m=+157.246501111" watchObservedRunningTime="2026-01-30 08:32:32.275506181 +0000 UTC m=+157.247817722" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.275689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" event={"ID":"6e70d9dc-9e4c-45a3-b8d3-046067b91297","Type":"ContainerStarted","Data":"93e0ab666029780fad5931ac542e70fd9dfecb9bf8d18d35d3bf794c65d34786"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.277546 4758 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.277582 4758 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-th8pq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.277825 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" podUID="6e70d9dc-9e4c-45a3-b8d3-046067b91297" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.288087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" event={"ID":"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0","Type":"ContainerStarted","Data":"6016ffe9e47e855f6cf2cedda9df7e72d0996ed1ecefb2107352419a6f6fab21"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.322794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.325724 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" podStartSLOduration=137.325706141 podStartE2EDuration="2m17.325706141s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.300478097 +0000 UTC m=+157.272789648" watchObservedRunningTime="2026-01-30 08:32:32.325706141 +0000 UTC m=+157.298017692" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.329715 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.829693147 +0000 UTC m=+157.802004698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.323027 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ggszh" event={"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"d41cd86d54048a93d1d7820887e6dc1684ec4ea5df97a11b60c1915b01803ccd"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.337461 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.352447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" event={"ID":"67466d94-68c1-4700-aec7-f2dd533b2fd6","Type":"ContainerStarted","Data":"b87b3c683082f67bae835a5cf24a8994a7a91bf697c96b217474d6a769ccbb1b"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.354852 
4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podStartSLOduration=137.354840668 podStartE2EDuration="2m17.354840668s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.329444699 +0000 UTC m=+157.301756260" watchObservedRunningTime="2026-01-30 08:32:32.354840668 +0000 UTC m=+157.327152219" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.381852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerStarted","Data":"166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.384939 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" event={"ID":"102f0f42-f8c6-4e98-9e96-1659a0a62c50","Type":"ContainerStarted","Data":"d0aca64fbc3bfe7bbe5c39309610ec0ccb266abea7a26fb6fd1c5e964de159a9"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.389585 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" event={"ID":"d03a1e8b-8151-4fb9-8a25-56e567566244","Type":"ContainerStarted","Data":"c554ccfa8bb9db09f2eb5f749b13985fb4cdb27d39068e0027ba26e865e3c345"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.395565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" event={"ID":"3934d9aa-0054-4ed3-a2e8-a57dc60dad77","Type":"ContainerStarted","Data":"8862f29e3a46d9917505d86ad0c5bd0140ddd010b0dca5c169a654101c5babf0"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.415920 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" podStartSLOduration=137.415875429 podStartE2EDuration="2m17.415875429s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.402020333 +0000 UTC m=+157.374331894" watchObservedRunningTime="2026-01-30 08:32:32.415875429 +0000 UTC m=+157.388186990" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.416724 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" podStartSLOduration=137.416718046 podStartE2EDuration="2m17.416718046s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.359477544 +0000 UTC m=+157.331789095" watchObservedRunningTime="2026-01-30 08:32:32.416718046 +0000 UTC m=+157.389029597" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.418340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"16a432d54676052b9d25233e837560dc1a1609ad9c6b03b13f1da7c2a5d5db5b"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.427389 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sgrqx" event={"ID":"99426938-7a55-4e2a-8ded-c683fe91d54d","Type":"ContainerStarted","Data":"3f5d9934ca5501836036e6fe57ea1ae093259aca2a0d45b91fcb66120bce9784"} Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.444777 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.944745928 +0000 UTC m=+157.917057479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.456953 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ggszh" podStartSLOduration=8.456934321 podStartE2EDuration="8.456934321s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.453115131 +0000 UTC m=+157.425426692" watchObservedRunningTime="2026-01-30 08:32:32.456934321 +0000 UTC m=+157.429245862" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.442634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.465900 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" 
event={"ID":"a0d98b4c-0d9c-4a9a-af05-1def738f8293","Type":"ContainerStarted","Data":"ae960977790be11ea097f997b4630865b4381fae1429d9f6f19c327a6f8c5e1d"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.482571 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" event={"ID":"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d","Type":"ContainerStarted","Data":"3ab0fe430a0ae3beefa28df4853e4e87f67e12b274152fc3cd02f29abfca8fdc"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.482842 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" event={"ID":"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d","Type":"ContainerStarted","Data":"1e112dd7f5bdc65162e7f5f8f1c8461f4827620ea6af4f19926fc082e5b43d6f"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.491642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" event={"ID":"25e6f6dd-4791-48ca-a614-928eb2fd6886","Type":"ContainerStarted","Data":"e4533caa89600f686f8b8228b28367ada54e64482732c4e1f1b5a013fe03ec66"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.511744 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" event={"ID":"79889e0a-985e-4bcc-bdfe-2160f05a5bfe","Type":"ContainerStarted","Data":"a190aec5a59d2c6f066a4d09df352ffcbb867d0c1ad56c2ccb6bd8a1f97a8d0d"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.520490 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" podStartSLOduration=137.520460601 podStartE2EDuration="2m17.520460601s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 08:32:32.519830081 +0000 UTC m=+157.492141632" watchObservedRunningTime="2026-01-30 08:32:32.520460601 +0000 UTC m=+157.492772152" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.521992 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:32 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:32 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:32 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.522230 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.532583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" event={"ID":"edeaef2c-0b5f-4448-a890-764774c8ff03","Type":"ContainerStarted","Data":"abfdaff97c89a9d31d5044850377bdff49bc11e6a6ff91ddc141bbf554d230a1"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.533864 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" event={"ID":"edeaef2c-0b5f-4448-a890-764774c8ff03","Type":"ContainerStarted","Data":"43602cbe1f45f39ae6b7eb5319d51c475098db3ab2450d8eab0a4797dcbc93ed"} Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.533983 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.534551 4758 
patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.534629 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.552357 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.562382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.564322 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.064282649 +0000 UTC m=+158.036594200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.608728 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" podStartSLOduration=137.608698737 podStartE2EDuration="2m17.608698737s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.606081555 +0000 UTC m=+157.578393116" watchObservedRunningTime="2026-01-30 08:32:32.608698737 +0000 UTC m=+157.581010278" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.609562 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" podStartSLOduration=137.609557294 podStartE2EDuration="2m17.609557294s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.569330808 +0000 UTC m=+157.541642379" watchObservedRunningTime="2026-01-30 08:32:32.609557294 +0000 UTC m=+157.581868845" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.656625 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" podStartSLOduration=137.656598735 podStartE2EDuration="2m17.656598735s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.653517758 +0000 UTC m=+157.625829319" watchObservedRunningTime="2026-01-30 08:32:32.656598735 +0000 UTC m=+157.628910286" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.664359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.665947 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.165927199 +0000 UTC m=+158.138238750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.682061 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" podStartSLOduration=137.682020995 podStartE2EDuration="2m17.682020995s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.67962549 +0000 UTC m=+157.651937051" watchObservedRunningTime="2026-01-30 08:32:32.682020995 +0000 UTC m=+157.654332546" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.758610 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.768973 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.769524 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:33.269497018 +0000 UTC m=+158.241808569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.773155 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" podStartSLOduration=137.773137332 podStartE2EDuration="2m17.773137332s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.71873794 +0000 UTC m=+157.691049501" watchObservedRunningTime="2026-01-30 08:32:32.773137332 +0000 UTC m=+157.745448883" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.847403 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" podStartSLOduration=137.847388799 podStartE2EDuration="2m17.847388799s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.773624747 +0000 UTC m=+157.745936308" watchObservedRunningTime="2026-01-30 08:32:32.847388799 +0000 UTC m=+157.819700350" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.870834 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.871224 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.371212608 +0000 UTC m=+158.343524159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.906352 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" podStartSLOduration=137.906332344 podStartE2EDuration="2m17.906332344s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.874420079 +0000 UTC m=+157.846731640" watchObservedRunningTime="2026-01-30 08:32:32.906332344 +0000 UTC m=+157.878643915" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.907157 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" podStartSLOduration=137.907150609 podStartE2EDuration="2m17.907150609s" podCreationTimestamp="2026-01-30 08:30:15 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.90397536 +0000 UTC m=+157.876286911" watchObservedRunningTime="2026-01-30 08:32:32.907150609 +0000 UTC m=+157.879462160" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.962702 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" podStartSLOduration=137.962687288 podStartE2EDuration="2m17.962687288s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.960853629 +0000 UTC m=+157.933165200" watchObservedRunningTime="2026-01-30 08:32:32.962687288 +0000 UTC m=+157.934998839" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.963441 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" podStartSLOduration=137.96343535 podStartE2EDuration="2m17.96343535s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.940382236 +0000 UTC m=+157.912693787" watchObservedRunningTime="2026-01-30 08:32:32.96343535 +0000 UTC m=+157.935746901" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.972131 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.972337 4758 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.472295919 +0000 UTC m=+158.444607470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.972432 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.972738 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.472726503 +0000 UTC m=+158.445038054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.074069 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.074228 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.574210017 +0000 UTC m=+158.546521578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.074276 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.074603 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.574592049 +0000 UTC m=+158.546903600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.175434 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.175619 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.675593808 +0000 UTC m=+158.647905359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.175728 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.176009 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.67599292 +0000 UTC m=+158.648304481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.277443 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.277620 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.777587877 +0000 UTC m=+158.749899428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.277754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.278028 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.778015831 +0000 UTC m=+158.750327382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.378876 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.379266 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.879239246 +0000 UTC m=+158.851550797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.480828 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.481241 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.981225755 +0000 UTC m=+158.953537306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.518440 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.524219 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:33 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:33 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:33 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.524289 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.539652 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" event={"ID":"e6e6799e-f1ee-4eee-a1f6-e69a09d888af","Type":"ContainerStarted","Data":"78b0a0c8a3c021adc0df24768615664a4efc2db62fe5e9e206938b79b522ada8"} Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.541392 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7mbqg 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.541427 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.581621 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.581914 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.081890703 +0000 UTC m=+159.054202254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.582190 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.582518 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.082511063 +0000 UTC m=+159.054822614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.582956 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.683246 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.702499 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.202471848 +0000 UTC m=+159.174783399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.741467 4758 csr.go:261] certificate signing request csr-2b69t is approved, waiting to be issued Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.821979 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.822510 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.322491145 +0000 UTC m=+159.294802696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.864462 4758 csr.go:257] certificate signing request csr-2b69t is issued Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.865199 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" podStartSLOduration=138.865183389 podStartE2EDuration="2m18.865183389s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:33.736136868 +0000 UTC m=+158.708448419" watchObservedRunningTime="2026-01-30 08:32:33.865183389 +0000 UTC m=+158.837494940" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.923769 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.923851 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.423834585 +0000 UTC m=+159.396146136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.924054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.924444 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.424436144 +0000 UTC m=+159.396747695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.024808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.025232 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.525217005 +0000 UTC m=+159.497528556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.126207 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.126489 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.626477672 +0000 UTC m=+159.598789213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.227435 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.227568 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.727552103 +0000 UTC m=+159.699863654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.227616 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.227893 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.727885834 +0000 UTC m=+159.700197385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.328255 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.328401 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.828382206 +0000 UTC m=+159.800693757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.328621 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.329148 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.82913728 +0000 UTC m=+159.801448831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.429260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.429468 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.929440697 +0000 UTC m=+159.901752258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.429606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.429903 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.929890761 +0000 UTC m=+159.902202312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.520710 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:34 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:34 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:34 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.520767 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.530414 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.530628 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:35.030613551 +0000 UTC m=+160.002925102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.548979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"c197a1884b2b080ce4fb7425f6862d556aee1a9da27a44b8623a4fc1fa6bc934"} Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.631829 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.632438 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.132425444 +0000 UTC m=+160.104736985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.634346 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.652641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.735060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.736006 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.235989583 +0000 UTC m=+160.208301134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.836392 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.837436 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.337425376 +0000 UTC m=+160.309736927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.866223 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 08:27:33 +0000 UTC, rotation deadline is 2026-12-05 02:51:06.848250898 +0000 UTC Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.866260 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7410h18m31.981994275s for next certificate rotation Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.937160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.937345 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.437315019 +0000 UTC m=+160.409626580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.038254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.038547 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.538533225 +0000 UTC m=+160.510844776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.139240 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.139513 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.639498072 +0000 UTC m=+160.611809623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.240449 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.240926 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.740909044 +0000 UTC m=+160.713220585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.243668 4758 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.341455 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.341636 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.841611953 +0000 UTC m=+160.813923504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.342105 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.342421 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.842408068 +0000 UTC m=+160.814719619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.443163 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.443382 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.943362275 +0000 UTC m=+160.915673836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.443784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.444115 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.944104658 +0000 UTC m=+160.916416209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.474740 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.521977 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:35 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:35 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:35 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.522303 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.544879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.545174 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.045158368 +0000 UTC m=+161.017469919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.545452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.545888 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.045872411 +0000 UTC m=+161.018183962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.556158 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"c739a4289cc899bcfb2c576273550aa7de620d41a114a199724648406e243a1a"} Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.556432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"591c8c89a920896f268a3c05082b81165474b1e0e6c84907c97a3efbebe19713"} Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.560565 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.560603 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.561557 4758 patch_prober.go:28] interesting pod/console-f9d7485db-rxgh6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.561585 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rxgh6" 
podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.602483 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.603423 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.605672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.648180 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.648467 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.148451799 +0000 UTC m=+161.120763350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.649276 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.149257234 +0000 UTC m=+161.121568785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.649889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.650025 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod 
\"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.650278 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.650671 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.656021 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" podStartSLOduration=11.656004117 podStartE2EDuration="11.656004117s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:35.636596675 +0000 UTC m=+160.608908226" watchObservedRunningTime="2026-01-30 08:32:35.656004117 +0000 UTC m=+160.628315658" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.675261 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.752645 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.752969 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.252937097 +0000 UTC m=+161.225248648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.754024 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.754574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.754495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.755105 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.255040093 +0000 UTC m=+161.227351644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.755497 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.755606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.756100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.802325 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.803548 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.805959 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.809542 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.835309 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857328 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76w9h\" (UniqueName: 
\"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857634 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857669 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.858160 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.358133757 +0000 UTC m=+161.330445308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.918764 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.946389 4758 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T08:32:35.243687671Z","Handler":null,"Name":""} Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.960757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961075 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961977 4758 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.962110 4758 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961988 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.962016 4758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.462005716 +0000 UTC m=+161.434317267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.988216 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.990020 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.003792 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.012721 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.062445 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.062826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.062970 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.063078 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.115255 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.148615 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166748 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166811 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.167529 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.167984 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.189627 4758 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.189660 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.216112 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.217114 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.220516 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.245447 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.268667 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc 
kubenswrapper[4758]: I0130 08:32:36.268717 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.268742 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.330463 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.336350 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.349215 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.369773 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " 
pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.369867 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.369890 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.370286 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.370484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.395884 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " 
pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: W0130 08:32:36.397754 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad53cc1a_5ef4_4e05_a996_b8e53194ef37.slice/crio-7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027 WatchSource:0}: Error finding container 7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027: Status 404 returned error can't find the container with id 7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027 Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.416984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.417023 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.422864 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.438387 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498059 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498096 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498489 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498506 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.520466 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.526358 4758 patch_prober.go:28] interesting 
pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:36 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:36 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:36 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.526407 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.550941 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.593151 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerStarted","Data":"7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027"} Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.600557 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.613321 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.815559 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.816983 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.841121 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.844276 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.868687 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.869522 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.887737 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.887808 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.923449 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.992354 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.992406 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.993573 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.053295 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.171451 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.201080 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.409486 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.515483 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.516463 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541085 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541180 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:37 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:37 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:37 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541219 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541417 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.542761 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 
08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.587375 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.588331 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.606757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.618343 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.622392 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639326 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdh94\" (UniqueName: 
\"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639457 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.650180 4758 generic.go:334] "Generic (PLEG): container finished" podID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerID="d0f286c054cafb8d5e4a578a3edfde29dd4ca6bca803278ecb39d974e67cc4ab" exitCode=0 Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.650921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"d0f286c054cafb8d5e4a578a3edfde29dd4ca6bca803278ecb39d974e67cc4ab"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.650949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerStarted","Data":"40c380f8967105b9213cda664c7b3c8de52ef7813797c018b751e70a571fe4ad"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 
08:32:37.655970 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.668676 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerID="dab894bc290593f6bf4efa3b73a1fcf4864265047195960150260493f5158b78" exitCode=0 Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.668762 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"dab894bc290593f6bf4efa3b73a1fcf4864265047195960150260493f5158b78"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.671313 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.671353 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"8066581e913c0a6adf8368222fb0eb4020a0da93b298f0c82e13a38f877da9d3"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.677981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerStarted","Data":"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.678013 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" 
event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerStarted","Data":"cf6f00a97d6458a9fc656f00daeea6735f190156c2f34abea4396149d2c34aee"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.678476 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.697551 4758 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerID="4fd34a8816ab9b8ac36f3ee7f4fccd351c108f7b629f6428e022613e9b4c1cfd" exitCode=0 Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.698475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"4fd34a8816ab9b8ac36f3ee7f4fccd351c108f7b629f6428e022613e9b4c1cfd"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.698500 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerStarted","Data":"0bb05ce1b23a66a7b1423906b5cec1cc9cdc27900f23c97851de922a49efcf1f"} Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740686 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740797 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " 
pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740837 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.741344 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.752540 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" 
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.753159 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.774699 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.783244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.800233 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.819622 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" podStartSLOduration=142.819606826 podStartE2EDuration="2m22.819606826s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:37.789000523 +0000 UTC m=+162.761312074" watchObservedRunningTime="2026-01-30 08:32:37.819606826 +0000 UTC m=+162.791918377" 
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.863615 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.934575 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.985665 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.986874 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.010009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.044542 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.044625 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.044647 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145902 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145970 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.146421 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.146657 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.152681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.183500 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.198091 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.339262 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.527501 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.537369 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.697226 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.770288 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerStarted","Data":"75f9050b0da8bff9b8eb3eabaa32d848410f146637e7bd947ed0cbfdcd3500bd"} Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.770341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerStarted","Data":"4afe439e516b8e3dd4a34e3ae337b9ee56dd97e1d9906a6c1c080bbcf4203875"} Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.785002 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerID="c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568" exitCode=0 Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.785663 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568"} Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.879699 4758 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.879678747 podStartE2EDuration="2.879678747s" podCreationTimestamp="2026-01-30 08:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:38.817719707 +0000 UTC m=+163.790031268" watchObservedRunningTime="2026-01-30 08:32:38.879678747 +0000 UTC m=+163.851990298" Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.882931 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.039365 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.040421 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.044707 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.049774 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.085137 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.085199 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.085224 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.199155 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.199599 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.199624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.200261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.200376 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.249563 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.385569 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.386644 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.396202 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.406528 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.443102 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.492047 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gj6b4"] Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.516871 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.516938 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.517012 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.617808 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.618308 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.618355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.618807 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.619095 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.647738 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.727939 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.920412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" event={"ID":"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4","Type":"ContainerStarted","Data":"f0b4a85eb0f41a9b2c411c4e5a860316719ec21a0a5965812a89ce9fda0feac2"} Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.940432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7ee9b4bc-c91f-454e-b176-411e75de16c8","Type":"ContainerStarted","Data":"9a4bb59357a96ef8f4c954872cd23d4914070e5b4e01845ca5e858c6cfe1ea2f"} Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.028310 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerStarted","Data":"1dd118451b6794cd44f2978fc2e3be0c8174873ed3e0e0c9a87ba8e5481221b7"} Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.034661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543"} Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.034695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"b94b087d70a109782aecba154dc0988599437def55360286dd53d57c7243892b"} Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.154742 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.501414 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.096777 4758 generic.go:334] "Generic (PLEG): container finished" podID="498010c8-fcda-4462-864d-88d7f70c2d54" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.096829 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.096852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerStarted","Data":"3ca3f01b986f56ea3541c35403f896246a5783f95ef8ac463437f4a2145f8bd8"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.129687 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerID="299e4a55ae6a6d9cdee83282fd6681484abb006c9d4b5da01dca9cc302d06082" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.130029 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7ee9b4bc-c91f-454e-b176-411e75de16c8","Type":"ContainerDied","Data":"299e4a55ae6a6d9cdee83282fd6681484abb006c9d4b5da01dca9cc302d06082"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.138508 4758 generic.go:334] "Generic (PLEG): container finished" podID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.138565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" 
event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.138591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerStarted","Data":"2d31c2681539f71b539b22493850f84cbd075530d6103b3a81f2ee6e83bc8595"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.153174 4758 generic.go:334] "Generic (PLEG): container finished" podID="a8160a87-6f56-4d61-a17b-8049588a293b" containerID="15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.153251 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.196990 4758 generic.go:334] "Generic (PLEG): container finished" podID="9313ed67-0218-4d32-adf7-710ba67de622" containerID="166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.197110 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerDied","Data":"166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.279282 4758 generic.go:334] "Generic (PLEG): container finished" podID="3a74af45-ed4e-4d30-b686-942663e223c6" containerID="9f60c8fda6e7e66a9633ce66ac886aa456dd7fafe83d70de6130a7dc9a89f0d9" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.279352 4758 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"9f60c8fda6e7e66a9633ce66ac886aa456dd7fafe83d70de6130a7dc9a89f0d9"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.290964 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" event={"ID":"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4","Type":"ContainerStarted","Data":"79d424bc37f43fcefbc74bf7e2029f041d2346c19453955c5f2bf28b84c0904f"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.346784 4758 generic.go:334] "Generic (PLEG): container finished" podID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerID="75f9050b0da8bff9b8eb3eabaa32d848410f146637e7bd947ed0cbfdcd3500bd" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.346830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerDied","Data":"75f9050b0da8bff9b8eb3eabaa32d848410f146637e7bd947ed0cbfdcd3500bd"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.721647 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:42 crc kubenswrapper[4758]: I0130 08:32:42.423964 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" event={"ID":"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4","Type":"ContainerStarted","Data":"ab783e3ce5ee2a8913d69afb7c2d2b29f1e6ea0e5cdd7ecf1e01d9153d273e02"} Jan 30 08:32:42 crc kubenswrapper[4758]: I0130 08:32:42.450672 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gj6b4" podStartSLOduration=147.450629996 podStartE2EDuration="2m27.450629996s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:42.443131079 +0000 UTC m=+167.415442630" watchObservedRunningTime="2026-01-30 08:32:42.450629996 +0000 UTC m=+167.422941547" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.082957 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.123510 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.138059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"4457cdf7-9bd2-43e6-8476-22076ded6dce\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.138132 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"4457cdf7-9bd2-43e6-8476-22076ded6dce\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.138604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4457cdf7-9bd2-43e6-8476-22076ded6dce" (UID: "4457cdf7-9bd2-43e6-8476-22076ded6dce"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.145292 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4457cdf7-9bd2-43e6-8476-22076ded6dce" (UID: "4457cdf7-9bd2-43e6-8476-22076ded6dce"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.175119 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245513 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"9313ed67-0218-4d32-adf7-710ba67de622\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245618 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"9313ed67-0218-4d32-adf7-710ba67de622\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245646 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"9313ed67-0218-4d32-adf7-710ba67de622\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245660 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"7ee9b4bc-c91f-454e-b176-411e75de16c8\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245683 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"7ee9b4bc-c91f-454e-b176-411e75de16c8\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245932 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245948 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.247453 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume" (OuterVolumeSpecName: "config-volume") pod "9313ed67-0218-4d32-adf7-710ba67de622" (UID: "9313ed67-0218-4d32-adf7-710ba67de622"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.248597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ee9b4bc-c91f-454e-b176-411e75de16c8" (UID: "7ee9b4bc-c91f-454e-b176-411e75de16c8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.251669 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ee9b4bc-c91f-454e-b176-411e75de16c8" (UID: "7ee9b4bc-c91f-454e-b176-411e75de16c8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.253319 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7" (OuterVolumeSpecName: "kube-api-access-8frm7") pod "9313ed67-0218-4d32-adf7-710ba67de622" (UID: "9313ed67-0218-4d32-adf7-710ba67de622"). InnerVolumeSpecName "kube-api-access-8frm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.254770 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9313ed67-0218-4d32-adf7-710ba67de622" (UID: "9313ed67-0218-4d32-adf7-710ba67de622"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347479 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347509 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347518 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347527 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347536 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.474946 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerDied","Data":"4afe439e516b8e3dd4a34e3ae337b9ee56dd97e1d9906a6c1c080bbcf4203875"} Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.474989 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afe439e516b8e3dd4a34e3ae337b9ee56dd97e1d9906a6c1c080bbcf4203875" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 
08:32:43.475069 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.484680 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7ee9b4bc-c91f-454e-b176-411e75de16c8","Type":"ContainerDied","Data":"9a4bb59357a96ef8f4c954872cd23d4914070e5b4e01845ca5e858c6cfe1ea2f"} Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.484821 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4bb59357a96ef8f4c954872cd23d4914070e5b4e01845ca5e858c6cfe1ea2f" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.484878 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.537617 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.543863 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerDied","Data":"b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a"} Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.543929 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a" Jan 30 08:32:45 crc kubenswrapper[4758]: I0130 08:32:45.569967 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:45 crc kubenswrapper[4758]: I0130 08:32:45.574432 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497444 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497768 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497520 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 
10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497890 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:52 crc kubenswrapper[4758]: I0130 08:32:52.387816 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:32:52 crc kubenswrapper[4758]: I0130 08:32:52.388671 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:32:56 crc kubenswrapper[4758]: I0130 08:32:56.432339 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:56 crc kubenswrapper[4758]: I0130 08:32:56.535467 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:33:03 crc kubenswrapper[4758]: I0130 08:33:03.003254 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:33:06 crc kubenswrapper[4758]: I0130 08:33:06.950158 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 
08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.034949 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.035669 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdh94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-x78bc_openshift-marketplace(a8160a87-6f56-4d61-a17b-8049588a293b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.037960 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-x78bc" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706229 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.706709 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerName="pruner" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706726 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerName="pruner" Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.706738 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9313ed67-0218-4d32-adf7-710ba67de622" containerName="collect-profiles" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706745 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9313ed67-0218-4d32-adf7-710ba67de622" containerName="collect-profiles" Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.706760 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerName="pruner" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706766 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerName="pruner" Jan 30 08:33:14 crc 
kubenswrapper[4758]: I0130 08:33:14.706859 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9313ed67-0218-4d32-adf7-710ba67de622" containerName="collect-profiles" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706867 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerName="pruner" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706877 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerName="pruner" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.707223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.709768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.711090 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.720187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.849257 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerStarted","Data":"007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7"} Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.854687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerStarted","Data":"b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0"} Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 
08:33:14.861737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795"} Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.873206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerStarted","Data":"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"} Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.881907 4758 generic.go:334] "Generic (PLEG): container finished" podID="3a74af45-ed4e-4d30-b686-942663e223c6" containerID="135ca0181a7c33753604da0d83c5eecfea5b146962a0bc2ffbf124c120de0fb0" exitCode=0 Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.881981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"135ca0181a7c33753604da0d83c5eecfea5b146962a0bc2ffbf124c120de0fb0"} Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888398 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888487 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 
08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888524 4758 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerID="c3f68acbef392b308c2ed3b35e3acc112673547e011fad5471bd97a0db59032a" exitCode=0 Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888574 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"c3f68acbef392b308c2ed3b35e3acc112673547e011fad5471bd97a0db59032a"} Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.895815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerStarted","Data":"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0"} Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.912459 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x78bc" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.990299 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.990450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.990922 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.011946 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.037262 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.433732 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 08:33:15 crc kubenswrapper[4758]: W0130 08:33:15.451239 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod264e5b45_414d_4f1b_b3af_1d9d762ba6b8.slice/crio-1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb WatchSource:0}: Error finding container 1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb: Status 404 returned error can't find the container with id 1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.913232 4758 generic.go:334] "Generic (PLEG): container finished" podID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerID="007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7" exitCode=0 Jan 30 08:33:15 
crc kubenswrapper[4758]: I0130 08:33:15.913576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7"} Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.915958 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerID="b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0" exitCode=0 Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.916115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0"} Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.917778 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerID="9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795" exitCode=0 Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.917986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795"} Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.918816 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"264e5b45-414d-4f1b-b3af-1d9d762ba6b8","Type":"ContainerStarted","Data":"1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb"} Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.921295 4758 generic.go:334] "Generic (PLEG): container finished" podID="498010c8-fcda-4462-864d-88d7f70c2d54" 
containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" exitCode=0 Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.921550 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0"} Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.927907 4758 generic.go:334] "Generic (PLEG): container finished" podID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705" exitCode=0 Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.927989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"} Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.931493 4758 generic.go:334] "Generic (PLEG): container finished" podID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerID="1be38ecc028a8deb5c4ef9c4dc42d8b9e613204cca0933664cd0b32822167804" exitCode=0 Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.931525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"264e5b45-414d-4f1b-b3af-1d9d762ba6b8","Type":"ContainerDied","Data":"1be38ecc028a8deb5c4ef9c4dc42d8b9e613204cca0933664cd0b32822167804"} Jan 30 08:33:17 crc kubenswrapper[4758]: I0130 08:33:17.938217 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerStarted","Data":"0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5"} Jan 30 08:33:17 crc kubenswrapper[4758]: I0130 08:33:17.964470 4758 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-xpchq" podStartSLOduration=4.042059664 podStartE2EDuration="42.964448504s" podCreationTimestamp="2026-01-30 08:32:35 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.687351814 +0000 UTC m=+162.659663365" lastFinishedPulling="2026-01-30 08:33:16.609740654 +0000 UTC m=+201.582052205" observedRunningTime="2026-01-30 08:33:17.954338489 +0000 UTC m=+202.926650040" watchObservedRunningTime="2026-01-30 08:33:17.964448504 +0000 UTC m=+202.936760055" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.172260 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341518 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341631 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341660 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "264e5b45-414d-4f1b-b3af-1d9d762ba6b8" (UID: "264e5b45-414d-4f1b-b3af-1d9d762ba6b8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341991 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.347332 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "264e5b45-414d-4f1b-b3af-1d9d762ba6b8" (UID: "264e5b45-414d-4f1b-b3af-1d9d762ba6b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.443090 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.953470 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.953702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"264e5b45-414d-4f1b-b3af-1d9d762ba6b8","Type":"ContainerDied","Data":"1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb"} Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.954464 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb" Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.963270 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d"} Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.966378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerStarted","Data":"b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206"} Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.982772 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6vchq" podStartSLOduration=3.118554465 podStartE2EDuration="44.982752587s" podCreationTimestamp="2026-01-30 08:32:36 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.687711415 +0000 UTC m=+162.660022956" lastFinishedPulling="2026-01-30 08:33:19.551909517 +0000 UTC m=+204.524221078" observedRunningTime="2026-01-30 08:33:20.980671012 +0000 UTC m=+205.952982563" watchObservedRunningTime="2026-01-30 08:33:20.982752587 +0000 UTC m=+205.955064128" Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.998716 4758 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-fvrs2" podStartSLOduration=4.890043124 podStartE2EDuration="45.998699635s" podCreationTimestamp="2026-01-30 08:32:35 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.699538608 +0000 UTC m=+162.671850159" lastFinishedPulling="2026-01-30 08:33:18.808195119 +0000 UTC m=+203.780506670" observedRunningTime="2026-01-30 08:33:20.997307141 +0000 UTC m=+205.969618692" watchObservedRunningTime="2026-01-30 08:33:20.998699635 +0000 UTC m=+205.971011186" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101297 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 08:33:21 crc kubenswrapper[4758]: E0130 08:33:21.101511 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerName="pruner" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101523 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerName="pruner" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101609 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerName="pruner" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101958 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.103533 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.103643 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.116429 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.178362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.178444 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.178478 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281165 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.307913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.414822 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.973217 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerStarted","Data":"dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624"} Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.015915 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h5f5" podStartSLOduration=3.483218595 podStartE2EDuration="47.015889769s" podCreationTimestamp="2026-01-30 08:32:35 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.655643566 +0000 UTC m=+162.627955117" lastFinishedPulling="2026-01-30 08:33:21.18831474 +0000 UTC m=+206.160626291" observedRunningTime="2026-01-30 08:33:22.01146998 +0000 UTC m=+206.983781541" watchObservedRunningTime="2026-01-30 08:33:22.015889769 +0000 UTC m=+206.988201320" Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.387803 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388152 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388195 
4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388704 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388797 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916" gracePeriod=600 Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.979380 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916" exitCode=0 Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.980112 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916"} Jan 30 08:33:24 crc kubenswrapper[4758]: I0130 08:33:24.199834 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 08:33:24 crc kubenswrapper[4758]: W0130 08:33:24.201959 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod05bc5468_8b30_42ea_a229_dba54dddcdaf.slice/crio-f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68 WatchSource:0}: Error finding container f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68: Status 404 returned error can't find the container with id f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68 Jan 30 08:33:24 crc kubenswrapper[4758]: I0130 08:33:24.990660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerStarted","Data":"f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68"} Jan 30 08:33:25 crc kubenswrapper[4758]: I0130 08:33:25.165881 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:33:25 crc kubenswrapper[4758]: I0130 08:33:25.918949 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:33:25 crc kubenswrapper[4758]: I0130 08:33:25.924157 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.005375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerStarted","Data":"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"} Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.010977 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1"} Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.013591 4758 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerStarted","Data":"bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554"} Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.015649 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerStarted","Data":"680d53edcf23e988cb0b58dfc2997e729a64c4eace90175659a7caa821078c76"} Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.018120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerStarted","Data":"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2"} Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.060909 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqdsb" podStartSLOduration=5.677304805 podStartE2EDuration="48.060887833s" podCreationTimestamp="2026-01-30 08:32:38 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.143133529 +0000 UTC m=+166.115445080" lastFinishedPulling="2026-01-30 08:33:23.526716557 +0000 UTC m=+208.499028108" observedRunningTime="2026-01-30 08:33:26.033432266 +0000 UTC m=+211.005743817" watchObservedRunningTime="2026-01-30 08:33:26.060887833 +0000 UTC m=+211.033199384" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.061565 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bc5l2" podStartSLOduration=6.543524545 podStartE2EDuration="49.061558165s" podCreationTimestamp="2026-01-30 08:32:37 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.2811067 +0000 UTC m=+166.253418251" lastFinishedPulling="2026-01-30 08:33:23.79914032 +0000 UTC m=+208.771451871" 
observedRunningTime="2026-01-30 08:33:26.060109589 +0000 UTC m=+211.032421160" watchObservedRunningTime="2026-01-30 08:33:26.061558165 +0000 UTC m=+211.033869716" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.077873 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w9whf" podStartSLOduration=3.8673670959999997 podStartE2EDuration="47.077848993s" podCreationTimestamp="2026-01-30 08:32:39 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.098457323 +0000 UTC m=+166.070768874" lastFinishedPulling="2026-01-30 08:33:24.30893922 +0000 UTC m=+209.281250771" observedRunningTime="2026-01-30 08:33:26.077071359 +0000 UTC m=+211.049382920" watchObservedRunningTime="2026-01-30 08:33:26.077848993 +0000 UTC m=+211.050160554" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.120957 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.12093824 podStartE2EDuration="5.12093824s" podCreationTimestamp="2026-01-30 08:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:33:26.12028758 +0000 UTC m=+211.092599141" watchObservedRunningTime="2026-01-30 08:33:26.12093824 +0000 UTC m=+211.093249791" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.150636 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.151516 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.289894 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.290576 
4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.331226 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.331509 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.369013 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.551698 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.551924 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.595722 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.081395 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.082717 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.089357 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.094329 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:33:28 crc kubenswrapper[4758]: I0130 08:33:28.340320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:28 crc kubenswrapper[4758]: I0130 08:33:28.340620 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:28 crc kubenswrapper[4758]: I0130 08:33:28.415388 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.443701 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.445707 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.728498 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.728671 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.880524 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.881470 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9h5f5" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" containerID="cri-o://dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624" gracePeriod=2 Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.041395 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerID="dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624" exitCode=0 Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.041450 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624"} Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.043324 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4"} Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.283927 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309058 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"34d332c4-fa91-4d24-9561-1b68c12a8224\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309178 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"34d332c4-fa91-4d24-9561-1b68c12a8224\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309302 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod 
\"34d332c4-fa91-4d24-9561-1b68c12a8224\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309867 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities" (OuterVolumeSpecName: "utilities") pod "34d332c4-fa91-4d24-9561-1b68c12a8224" (UID: "34d332c4-fa91-4d24-9561-1b68c12a8224"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.327325 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2" (OuterVolumeSpecName: "kube-api-access-5lmb2") pod "34d332c4-fa91-4d24-9561-1b68c12a8224" (UID: "34d332c4-fa91-4d24-9561-1b68c12a8224"). InnerVolumeSpecName "kube-api-access-5lmb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.360674 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34d332c4-fa91-4d24-9561-1b68c12a8224" (UID: "34d332c4-fa91-4d24-9561-1b68c12a8224"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.411317 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.411350 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.411364 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.482864 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.483080 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6vchq" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" containerID="cri-o://85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d" gracePeriod=2 Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.489908 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqdsb" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server" probeResult="failure" output=< Jan 30 08:33:30 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:33:30 crc kubenswrapper[4758]: > Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.769834 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w9whf" 
podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" probeResult="failure" output=< Jan 30 08:33:30 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:33:30 crc kubenswrapper[4758]: > Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.057927 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"40c380f8967105b9213cda664c7b3c8de52ef7813797c018b751e70a571fe4ad"} Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.058333 4758 scope.go:117] "RemoveContainer" containerID="dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.058019 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.061580 4758 generic.go:334] "Generic (PLEG): container finished" podID="a8160a87-6f56-4d61-a17b-8049588a293b" containerID="b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4" exitCode=0 Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.061614 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4"} Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.083265 4758 scope.go:117] "RemoveContainer" containerID="007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.134321 4758 scope.go:117] "RemoveContainer" containerID="d0f286c054cafb8d5e4a578a3edfde29dd4ca6bca803278ecb39d974e67cc4ab" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.136072 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.143963 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.774060 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" path="/var/lib/kubelet/pods/34d332c4-fa91-4d24-9561-1b68c12a8224/volumes" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.069720 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerID="85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d" exitCode=0 Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.069802 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d"} Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.254239 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.336110 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.336176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.336290 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.337243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities" (OuterVolumeSpecName: "utilities") pod "83ea0fe7-14a8-4194-abfe-dfc8634b8acf" (UID: "83ea0fe7-14a8-4194-abfe-dfc8634b8acf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.347594 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9" (OuterVolumeSpecName: "kube-api-access-n9dg9") pod "83ea0fe7-14a8-4194-abfe-dfc8634b8acf" (UID: "83ea0fe7-14a8-4194-abfe-dfc8634b8acf"). InnerVolumeSpecName "kube-api-access-n9dg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.395012 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ea0fe7-14a8-4194-abfe-dfc8634b8acf" (UID: "83ea0fe7-14a8-4194-abfe-dfc8634b8acf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.437594 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.437637 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.437649 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.078884 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16"} Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.081840 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"8066581e913c0a6adf8368222fb0eb4020a0da93b298f0c82e13a38f877da9d3"} Jan 30 08:33:33 crc kubenswrapper[4758]: 
I0130 08:33:33.081907 4758 scope.go:117] "RemoveContainer" containerID="85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.081910 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.096597 4758 scope.go:117] "RemoveContainer" containerID="9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.117615 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.124875 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.133070 4758 scope.go:117] "RemoveContainer" containerID="c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.777297 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" path="/var/lib/kubelet/pods/83ea0fe7-14a8-4194-abfe-dfc8634b8acf/volumes" Jan 30 08:33:34 crc kubenswrapper[4758]: I0130 08:33:34.110594 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x78bc" podStartSLOduration=5.549672587 podStartE2EDuration="57.11056906s" podCreationTimestamp="2026-01-30 08:32:37 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.185529602 +0000 UTC m=+166.157841153" lastFinishedPulling="2026-01-30 08:33:32.746426065 +0000 UTC m=+217.718737626" observedRunningTime="2026-01-30 08:33:34.107437942 +0000 UTC m=+219.079749493" watchObservedRunningTime="2026-01-30 08:33:34.11056906 +0000 UTC m=+219.082880611" Jan 30 08:33:37 crc kubenswrapper[4758]: I0130 08:33:37.936141 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:37 crc kubenswrapper[4758]: I0130 08:33:37.936571 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:37 crc kubenswrapper[4758]: I0130 08:33:37.976951 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:38 crc kubenswrapper[4758]: I0130 08:33:38.142903 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:38 crc kubenswrapper[4758]: I0130 08:33:38.375979 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.484916 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.531064 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.763235 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.800490 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:40 crc kubenswrapper[4758]: I0130 08:33:40.879858 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:33:40 crc kubenswrapper[4758]: I0130 08:33:40.880599 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bc5l2" 
podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" containerID="cri-o://bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554" gracePeriod=2 Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.125455 4758 generic.go:334] "Generic (PLEG): container finished" podID="3a74af45-ed4e-4d30-b686-942663e223c6" containerID="bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554" exitCode=0 Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.126030 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554"} Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.806964 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.907442 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"3a74af45-ed4e-4d30-b686-942663e223c6\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.907530 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"3a74af45-ed4e-4d30-b686-942663e223c6\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.907557 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"3a74af45-ed4e-4d30-b686-942663e223c6\" (UID: 
\"3a74af45-ed4e-4d30-b686-942663e223c6\") " Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.908429 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities" (OuterVolumeSpecName: "utilities") pod "3a74af45-ed4e-4d30-b686-942663e223c6" (UID: "3a74af45-ed4e-4d30-b686-942663e223c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.917253 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf" (OuterVolumeSpecName: "kube-api-access-xb7gf") pod "3a74af45-ed4e-4d30-b686-942663e223c6" (UID: "3a74af45-ed4e-4d30-b686-942663e223c6"). InnerVolumeSpecName "kube-api-access-xb7gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.928816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a74af45-ed4e-4d30-b686-942663e223c6" (UID: "3a74af45-ed4e-4d30-b686-942663e223c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.008762 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.008795 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.008806 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.142426 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"1dd118451b6794cd44f2978fc2e3be0c8174873ed3e0e0c9a87ba8e5481221b7"} Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.142514 4758 scope.go:117] "RemoveContainer" containerID="bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.142586 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.178300 4758 scope.go:117] "RemoveContainer" containerID="135ca0181a7c33753604da0d83c5eecfea5b146962a0bc2ffbf124c120de0fb0" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.178634 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.181439 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.192695 4758 scope.go:117] "RemoveContainer" containerID="9f60c8fda6e7e66a9633ce66ac886aa456dd7fafe83d70de6130a7dc9a89f0d9" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.281440 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.281699 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w9whf" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" containerID="cri-o://e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" gracePeriod=2 Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.619630 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.716813 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"498010c8-fcda-4462-864d-88d7f70c2d54\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.716885 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"498010c8-fcda-4462-864d-88d7f70c2d54\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.716914 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"498010c8-fcda-4462-864d-88d7f70c2d54\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.717668 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities" (OuterVolumeSpecName: "utilities") pod "498010c8-fcda-4462-864d-88d7f70c2d54" (UID: "498010c8-fcda-4462-864d-88d7f70c2d54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.723634 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s" (OuterVolumeSpecName: "kube-api-access-glx9s") pod "498010c8-fcda-4462-864d-88d7f70c2d54" (UID: "498010c8-fcda-4462-864d-88d7f70c2d54"). InnerVolumeSpecName "kube-api-access-glx9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.818542 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.818574 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.861507 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "498010c8-fcda-4462-864d-88d7f70c2d54" (UID: "498010c8-fcda-4462-864d-88d7f70c2d54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.920008 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149395 4758 generic.go:334] "Generic (PLEG): container finished" podID="498010c8-fcda-4462-864d-88d7f70c2d54" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" exitCode=0 Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149438 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2"} Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149470 4758 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"3ca3f01b986f56ea3541c35403f896246a5783f95ef8ac463437f4a2145f8bd8"} Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149492 4758 scope.go:117] "RemoveContainer" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149497 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.174656 4758 scope.go:117] "RemoveContainer" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.188265 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.191063 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.201252 4758 scope.go:117] "RemoveContainer" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.226549 4758 scope.go:117] "RemoveContainer" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" Jan 30 08:33:43 crc kubenswrapper[4758]: E0130 08:33:43.226946 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2\": container with ID starting with e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2 not found: ID does not exist" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.226978 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2"} err="failed to get container status \"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2\": rpc error: code = NotFound desc = could not find container \"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2\": container with ID starting with e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2 not found: ID does not exist" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.226997 4758 scope.go:117] "RemoveContainer" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" Jan 30 08:33:43 crc kubenswrapper[4758]: E0130 08:33:43.227391 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0\": container with ID starting with ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0 not found: ID does not exist" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.227434 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0"} err="failed to get container status \"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0\": rpc error: code = NotFound desc = could not find container \"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0\": container with ID starting with ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0 not found: ID does not exist" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.227487 4758 scope.go:117] "RemoveContainer" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" Jan 30 08:33:43 crc kubenswrapper[4758]: E0130 
08:33:43.227876 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336\": container with ID starting with 500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336 not found: ID does not exist" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.227924 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336"} err="failed to get container status \"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336\": rpc error: code = NotFound desc = could not find container \"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336\": container with ID starting with 500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336 not found: ID does not exist" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.776934 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" path="/var/lib/kubelet/pods/3a74af45-ed4e-4d30-b686-942663e223c6/volumes" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.777537 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" path="/var/lib/kubelet/pods/498010c8-fcda-4462-864d-88d7f70c2d54/volumes" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.188467 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" containerID="cri-o://0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" gracePeriod=15 Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.586387 4758 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747472 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747537 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747597 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747640 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " 
Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747675 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747718 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747733 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747789 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 
08:33:50.747826 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.748758 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.748781 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.748801 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.749269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.749173 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.749438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.750724 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.751189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.754203 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.754545 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.754874 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.755462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k" (OuterVolumeSpecName: "kube-api-access-qvl5k") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "kube-api-access-qvl5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.755693 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.758621 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.761501 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.768557 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.770291 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.849985 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850024 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850051 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850063 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850073 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") on node \"crc\" DevicePath 
\"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850082 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850092 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850100 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850109 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850120 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850130 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850139 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850147 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850155 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203585 4758 generic.go:334] "Generic (PLEG): container finished" podID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" exitCode=0 Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerDied","Data":"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34"} Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203662 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203681 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerDied","Data":"53b5888c06d75276ec1154090765c765987de449f501b8cfa950546c58f4dcb5"} Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203705 4758 scope.go:117] "RemoveContainer" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.223724 4758 scope.go:117] "RemoveContainer" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" Jan 30 08:33:51 crc kubenswrapper[4758]: E0130 08:33:51.224106 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34\": container with ID starting with 0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34 not found: ID does not exist" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.224136 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34"} err="failed to get container status \"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34\": rpc error: code = NotFound desc = could not find container \"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34\": container with ID starting with 0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34 not found: ID does not exist" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.251714 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.255294 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.779636 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" path="/var/lib/kubelet/pods/92757cb7-5e41-4c2d-bbdf-0e4010e4611d/volumes" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.960636 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-rjx9k"] Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961773 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961802 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961847 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961862 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961875 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961891 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961905 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961925 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961939 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961958 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961975 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961996 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962013 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962074 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962096 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962122 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962139 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962157 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962171 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962191 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962203 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962221 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962234 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962254 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962266 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962443 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962463 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962480 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962509 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962529 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.963152 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.966887 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.967139 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972171 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972188 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972285 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972296 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973015 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973376 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973396 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973425 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:33:59 crc 
kubenswrapper[4758]: I0130 08:33:59.973452 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.974415 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-dir\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977553 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977593 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977781 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977957 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-rjx9k\" 
(UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978100 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5xjm\" (UniqueName: \"kubernetes.io/projected/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-kube-api-access-d5xjm\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978170 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-policies\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978344 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978369 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.991985 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.995502 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.004920 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-rjx9k"] Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.010828 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: 
\"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079742 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-policies\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079794 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079836 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-dir\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079886 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079904 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: 
\"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079964 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079990 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.080023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.080070 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5xjm\" (UniqueName: \"kubernetes.io/projected/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-kube-api-access-d5xjm\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.082008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.082913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.083200 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.084897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-policies\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.085870 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-dir\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc 
kubenswrapper[4758]: I0130 08:34:00.088681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.089165 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.090509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.091805 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.092882 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.093570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.096023 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.097179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.101498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5xjm\" (UniqueName: \"kubernetes.io/projected/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-kube-api-access-d5xjm\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc 
kubenswrapper[4758]: I0130 08:34:00.304186 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.523633 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-rjx9k"] Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.266103 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" event={"ID":"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017","Type":"ContainerStarted","Data":"b77e2aabb71f998e385b8e95b1fc20acfd490d54a3d696f84af9f428b41cdab8"} Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.266143 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" event={"ID":"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017","Type":"ContainerStarted","Data":"f8c27f1a34b6a31d3c73c2ff171a9b101be127e08bc5a7e6c51bc6b8fcf504ac"} Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.267345 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.277022 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.307624 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" podStartSLOduration=36.307591232 podStartE2EDuration="36.307591232s" podCreationTimestamp="2026-01-30 08:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:34:01.299316183 +0000 UTC m=+246.271627774" watchObservedRunningTime="2026-01-30 08:34:01.307591232 +0000 
UTC m=+246.279902783" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.560997 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.561834 4758 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562054 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562205 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562521 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562570 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562675 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562600 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.563910 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564049 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564060 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564069 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564075 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564083 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564090 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 
08:34:02.564101 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564106 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564113 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564118 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564128 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564134 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564141 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564147 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564266 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564276 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 08:34:02 crc 
kubenswrapper[4758]: I0130 08:34:02.564283 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564294 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564302 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564311 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617209 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617770 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617935 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618522 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618792 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720329 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720365 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720387 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 
30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720465 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720571 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720768 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.814806 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" volumeName="registry-storage" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.278936 4758 generic.go:334] "Generic (PLEG): container finished" podID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerID="680d53edcf23e988cb0b58dfc2997e729a64c4eace90175659a7caa821078c76" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.279023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerDied","Data":"680d53edcf23e988cb0b58dfc2997e729a64c4eace90175659a7caa821078c76"} Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.279992 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.282464 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.282601 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.284598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286006 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286037 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286050 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286080 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405" exitCode=2 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286138 4758 scope.go:117] "RemoveContainer" containerID="0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.293177 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.482931 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.483675 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540610 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"05bc5468-8b30-42ea-a229-dba54dddcdaf\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540681 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"05bc5468-8b30-42ea-a229-dba54dddcdaf\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"05bc5468-8b30-42ea-a229-dba54dddcdaf\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540825 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir" (OuterVolumeSpecName: 
"kubelet-dir") pod "05bc5468-8b30-42ea-a229-dba54dddcdaf" (UID: "05bc5468-8b30-42ea-a229-dba54dddcdaf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540839 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock" (OuterVolumeSpecName: "var-lock") pod "05bc5468-8b30-42ea-a229-dba54dddcdaf" (UID: "05bc5468-8b30-42ea-a229-dba54dddcdaf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540964 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540978 4758 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.547268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05bc5468-8b30-42ea-a229-dba54dddcdaf" (UID: "05bc5468-8b30-42ea-a229-dba54dddcdaf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.650514 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.300603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerDied","Data":"f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68"} Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.300938 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.300629 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.302941 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.303879 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4" exitCode=0 Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.313161 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.454919 4758 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.455493 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.455996 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.456483 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.457032 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.457127 4758 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.457569 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="200ms" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.611209 
4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.611776 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.612216 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.612445 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.658633 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="400ms" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661381 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661459 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661490 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661596 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661711 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661956 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661985 4758 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661998 4758 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.772909 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.773300 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.775302 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 08:34:06 crc kubenswrapper[4758]: E0130 08:34:06.059251 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="800ms" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.311427 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.313005 4758 scope.go:117] "RemoveContainer" containerID="11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.313078 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.314017 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.314448 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.317368 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: 
I0130 08:34:06.317726 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.327838 4758 scope.go:117] "RemoveContainer" containerID="d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.338595 4758 scope.go:117] "RemoveContainer" containerID="52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.349266 4758 scope.go:117] "RemoveContainer" containerID="a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.360114 4758 scope.go:117] "RemoveContainer" containerID="31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.373409 4758 scope.go:117] "RemoveContainer" containerID="7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68" Jan 30 08:34:06 crc kubenswrapper[4758]: E0130 08:34:06.860222 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="1.6s" Jan 30 08:34:07 crc kubenswrapper[4758]: E0130 08:34:07.598898 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:07 crc kubenswrapper[4758]: I0130 08:34:07.599314 4758 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:07 crc kubenswrapper[4758]: E0130 08:34:07.631581 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f7534a9fb1014 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,LastTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 08:34:08 crc kubenswrapper[4758]: I0130 08:34:08.327005 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757"} Jan 30 08:34:08 crc kubenswrapper[4758]: I0130 08:34:08.327980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"577461f943558d7111a8423f0be9e192864880a8d1c7ad568b2ed286e5531fb6"} Jan 30 
08:34:08 crc kubenswrapper[4758]: I0130 08:34:08.328589 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:08 crc kubenswrapper[4758]: E0130 08:34:08.328594 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:08 crc kubenswrapper[4758]: E0130 08:34:08.461218 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="3.2s" Jan 30 08:34:08 crc kubenswrapper[4758]: E0130 08:34:08.663655 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f7534a9fb1014 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 
08:34:07.630446612 +0000 UTC m=+252.602758163,LastTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 08:34:09 crc kubenswrapper[4758]: E0130 08:34:09.333453 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:11 crc kubenswrapper[4758]: E0130 08:34:11.662583 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="6.4s" Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.365480 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366020 4758 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208" exitCode=1 Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208"} Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366620 4758 scope.go:117] "RemoveContainer" containerID="dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208" Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 
08:34:15.366900 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.367541 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.771664 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.772474 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.037289 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.374879 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.374949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"928ac18f142e91b14f4323406959a98f45ca48e134f7d6554009452fa582685f"} Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.376308 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.376672 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.492758 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.493220 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.493672 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.494087 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 
08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.494342 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.494385 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.767768 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.770272 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.771122 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.784267 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.784311 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.784933 4758 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.785444 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:16 crc kubenswrapper[4758]: W0130 08:34:16.807565 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347 WatchSource:0}: Error finding container 1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347: Status 404 returned error can't find the container with id 1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347 Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381144 4758 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="994db0503d06c8a0e6c1755ffca83d8f3b773cc2fb92c1a4849df5db46270e6d" exitCode=0 Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381288 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"994db0503d06c8a0e6c1755ffca83d8f3b773cc2fb92c1a4849df5db46270e6d"} Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381620 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347"} Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381915 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381932 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.382852 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:17 crc kubenswrapper[4758]: E0130 08:34:17.382941 4758 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.383194 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.397733 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3b6063398ec7c6205189ec55df2092e5a6b3bd041e8e05e4a8f7d797007069e"} Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.398134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11699d01ad01c89bf19be1f0e389482dd270044fa8b15b80ab0e5312f3a77d04"} Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.398150 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"76db6a6575829613d713514dfe407c3adb132e57a1a9ea32d281c2c84a2638c9"} Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.398162 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff2b56706c15dbf79f056f889c62678cea1f5108e6dfeab706d716de046a9891"} Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.405867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"50627c6b6f71291632d1ef441f903ebea9406343bb93d10caf6075404f38d442"} Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.406157 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.406183 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.406207 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:20 crc kubenswrapper[4758]: I0130 08:34:20.638806 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:34:21 crc kubenswrapper[4758]: I0130 08:34:21.786485 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:21 crc kubenswrapper[4758]: I0130 08:34:21.786560 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:21 crc kubenswrapper[4758]: I0130 08:34:21.791290 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:24 crc kubenswrapper[4758]: I0130 08:34:24.420596 4758 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.439637 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.439690 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.444715 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.798949 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b1e8c927-3894-4805-9894-26461b80c7dd" Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.037882 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.041733 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 
08:34:26.444422 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.444455 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7" Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.448429 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b1e8c927-3894-4805-9894-26461b80c7dd" Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.449525 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:34:33 crc kubenswrapper[4758]: I0130 08:34:33.890118 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 08:34:34 crc kubenswrapper[4758]: I0130 08:34:34.396638 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 08:34:34 crc kubenswrapper[4758]: I0130 08:34:34.670917 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 08:34:34 crc kubenswrapper[4758]: I0130 08:34:34.830373 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.063087 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.307924 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" 
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.312657 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.457194 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.465421 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.666622 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.721970 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.762549 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.135876 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.187321 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.416856 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.500511 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.666189 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.685335 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.099029 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.291604 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.316356 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.358525 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.386241 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.427116 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.526118 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.532694 4758 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.554847 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.608806 
4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.654857 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.686022 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.695559 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.886786 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.958793 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.027419 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.030574 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.191714 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.459506 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.587980 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.654137 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.691896 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.841257 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.910220 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.933838 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.985710 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.018421 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.023618 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.045490 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.098657 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:34:39 crc 
kubenswrapper[4758]: I0130 08:34:39.139785 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.142672 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.211146 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.225497 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.358540 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.397475 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.483285 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.521482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.597768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.671290 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.708988 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.732717 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.750505 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.756380 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.843561 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.878162 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.917758 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.944905 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.046783 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.142239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.172390 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.219086 4758 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.225124 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.239539 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.239766 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.300832 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.361235 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.380934 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.466773 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.483197 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.512409 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.754773 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 
08:34:40.757629 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.845950 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.925115 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.058271 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.102488 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.246351 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.246715 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.302108 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.450493 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.511552 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.516743 4758 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.553787 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.619521 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.711869 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.747377 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.805596 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.827818 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.968704 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.972419 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.052554 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.188890 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.264697 4758 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.318754 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.342345 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.408929 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.425630 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.463995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.466121 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.478026 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.541092 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.725338 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.734672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 08:34:42 crc 
kubenswrapper[4758]: I0130 08:34:42.754674 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.840813 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.853874 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.871686 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.954375 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.983598 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.003216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.091809 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.097395 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.141367 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.202128 4758 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.205006 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.248598 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.302570 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.334872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.449683 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.489416 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.540225 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.616566 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.620088 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.623881 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 08:34:43 crc 
kubenswrapper[4758]: I0130 08:34:43.672479 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.689831 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.717578 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.762174 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.835335 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.871687 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.968503 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.025400 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.080379 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.122247 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.150781 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:34:44 crc 
kubenswrapper[4758]: I0130 08:34:44.179498 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.386819 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.397186 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.418652 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.426112 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.522420 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.522748 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.582750 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.642119 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.683701 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.704190 4758 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.704468 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.707593 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.723340 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.727631 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.760096 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.768538 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.823801 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.842263 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.869118 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.921249 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.000637 4758 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.075456 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.135715 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.141374 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.213801 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.294771 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.346371 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.393260 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.393504 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.413933 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.423435 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 08:34:45 crc 
kubenswrapper[4758]: I0130 08:34:45.464192 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.465335 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.489580 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.547662 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.560038 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.609730 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.665693 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.673244 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.714306 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.717164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.808257 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.870933 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.954454 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.977314 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.007300 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.053192 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.086164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.192825 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.220768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.230704 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.237720 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.241863 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.241908 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.247693 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.255900 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.258962 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.285997 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.285979238 podStartE2EDuration="22.285979238s" podCreationTimestamp="2026-01-30 08:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:34:46.264322292 +0000 UTC m=+291.236633843" watchObservedRunningTime="2026-01-30 08:34:46.285979238 +0000 UTC m=+291.258290789" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.407334 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.594659 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.737771 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 08:34:46 crc 
kubenswrapper[4758]: I0130 08:34:46.737872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.860712 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.883809 4758 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.884062 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" gracePeriod=5 Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.909564 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.911618 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.011876 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.012467 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.051018 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.064912 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.070022 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.089679 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.106808 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.112241 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.177824 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.216091 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.218666 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.300847 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.342381 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.473494 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 
08:34:47.475274 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.566262 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.585916 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.647270 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.700217 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.728648 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.741437 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.150846 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.152756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.265888 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.365609 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.381930 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.489568 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.498564 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.626871 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.666417 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.758668 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.769283 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.814605 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.031216 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.037616 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.084594 4758 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.687903 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.827479 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.050574 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.219461 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.271504 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.387411 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.586665 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.878201 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 08:34:51 crc kubenswrapper[4758]: I0130 08:34:51.661279 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.459886 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.460223 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490492 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490586 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490617 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490639 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490712 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490715 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490735 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490766 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491029 4758 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491086 4758 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491101 4758 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491111 4758 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.497443 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585603 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585663 4758 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" exitCode=137 Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585715 4758 scope.go:117] "RemoveContainer" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585761 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.592997 4758 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.617176 4758 scope.go:117] "RemoveContainer" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" Jan 30 08:34:52 crc kubenswrapper[4758]: E0130 08:34:52.617670 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757\": container with ID starting with 8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757 not found: ID does not exist" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.617727 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757"} err="failed to get container status \"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757\": rpc error: code = NotFound desc = could not find container \"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757\": container with ID starting with 8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757 not found: ID does not exist" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.756735 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 08:34:53 crc kubenswrapper[4758]: I0130 08:34:53.774855 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 08:34:55 crc kubenswrapper[4758]: I0130 08:34:55.533916 4758 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.658280 4758 generic.go:334] "Generic (PLEG): container finished" podID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerID="00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e" exitCode=0 Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.658351 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerDied","Data":"00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e"} Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.659378 4758 scope.go:117] "RemoveContainer" containerID="00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e" Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.915419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.916013 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:07 crc kubenswrapper[4758]: I0130 08:35:07.667818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerStarted","Data":"9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b"} Jan 30 08:35:07 crc kubenswrapper[4758]: I0130 08:35:07.668511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:07 crc kubenswrapper[4758]: I0130 08:35:07.672221 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.006723 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.007521 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" containerID="cri-o://483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" gracePeriod=30 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.112936 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.113155 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" 
podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" containerID="cri-o://f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" gracePeriod=30 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.343744 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.420695 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543145 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543210 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543261 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543346 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: 
\"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543395 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543458 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543533 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543577 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544109 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca" (OuterVolumeSpecName: "client-ca") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544195 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config" (OuterVolumeSpecName: "config") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544748 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config" (OuterVolumeSpecName: "config") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.549275 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc" (OuterVolumeSpecName: "kube-api-access-gg4sc") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "kube-api-access-gg4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.549280 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.549350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg" (OuterVolumeSpecName: "kube-api-access-m6shg") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "kube-api-access-m6shg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.550352 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645343 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645397 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645421 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645447 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645466 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645483 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645497 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645555 4758 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645570 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.841983 4758 generic.go:334] "Generic (PLEG): container finished" podID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" exitCode=0 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerDied","Data":"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842160 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerDied","Data":"1a4096fd26f7cd35b668bd3b74bb0a13df67390a721f7d3ce8bb74a60052f47a"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842157 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842223 4758 scope.go:117] "RemoveContainer" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.845892 4758 generic.go:334] "Generic (PLEG): container finished" podID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" exitCode=0 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.845942 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerDied","Data":"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.845969 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerDied","Data":"c8e9069e7ec2c171424e1bd93a4b8f9855a2158a2e3b30930d28fb757cb65678"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.846017 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.865273 4758 scope.go:117] "RemoveContainer" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" Jan 30 08:35:42 crc kubenswrapper[4758]: E0130 08:35:42.865689 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d\": container with ID starting with 483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d not found: ID does not exist" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.865786 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d"} err="failed to get container status \"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d\": rpc error: code = NotFound desc = could not find container \"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d\": container with ID starting with 483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d not found: ID does not exist" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.865877 4758 scope.go:117] "RemoveContainer" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.879198 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.883476 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.895925 4758 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.899078 4758 scope.go:117] "RemoveContainer" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" Jan 30 08:35:42 crc kubenswrapper[4758]: E0130 08:35:42.899916 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83\": container with ID starting with f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83 not found: ID does not exist" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.899952 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83"} err="failed to get container status \"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83\": rpc error: code = NotFound desc = could not find container \"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83\": container with ID starting with f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83 not found: ID does not exist" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.900114 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:35:43 crc kubenswrapper[4758]: I0130 08:35:43.776144 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" path="/var/lib/kubelet/pods/31897db9-9dd1-42a9-8eae-b5e13e113a3c/volumes" Jan 30 08:35:43 crc kubenswrapper[4758]: I0130 08:35:43.777305 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" 
path="/var/lib/kubelet/pods/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a/volumes" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038527 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038784 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038798 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038810 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038816 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038837 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038853 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerName="installer" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038860 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerName="installer" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038973 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerName="installer" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038981 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038988 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038997 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.039412 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.041365 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.041837 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.042058 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.042290 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.044308 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.046136 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] 
Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.049684 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.051365 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.054719 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.054922 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.055051 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.055351 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.055669 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.056136 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.056654 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.056979 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:35:44 crc 
kubenswrapper[4758]: I0130 08:35:44.062996 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162510 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 
08:35:44.162625 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162652 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162695 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.263958 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264026 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264143 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 
08:35:44.264185 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264260 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264299 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264352 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: 
\"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265492 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.266913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.269528 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.278664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.285889 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.286173 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 
08:35:44.387924 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.396879 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.589895 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.627153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:44 crc kubenswrapper[4758]: W0130 08:35:44.635544 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ef3df9a_441b_45c6_ae42_22bf2ced1596.slice/crio-cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4 WatchSource:0}: Error finding container cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4: Status 404 returned error can't find the container with id cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4 Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.857401 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerStarted","Data":"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.857709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerStarted","Data":"cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4"} 
Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858024 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858753 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerStarted","Data":"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858771 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerStarted","Data":"c4d0e2b649449ef67d140345bed4ab7b2439ed578b46e3a77eefc092f8031eec"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.865911 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.898891 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" podStartSLOduration=2.898869485 podStartE2EDuration="2.898869485s" podCreationTimestamp="2026-01-30 08:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:44.883703796 +0000 UTC m=+349.856015337" watchObservedRunningTime="2026-01-30 08:35:44.898869485 +0000 UTC m=+349.871181046" Jan 30 08:35:45 crc kubenswrapper[4758]: I0130 08:35:45.134723 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:45 crc kubenswrapper[4758]: I0130 08:35:45.154588 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" podStartSLOduration=3.154568576 podStartE2EDuration="3.154568576s" podCreationTimestamp="2026-01-30 08:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:44.94259006 +0000 UTC m=+349.914901621" watchObservedRunningTime="2026-01-30 08:35:45.154568576 +0000 UTC m=+350.126880127" Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.833007 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.834330 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager" containerID="cri-o://0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" gracePeriod=30 Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.861411 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.861591 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager" containerID="cri-o://d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" gracePeriod=30 Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 
08:35:49.332934 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.381970 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.438503 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.439307 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440171 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440220 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440319 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440356 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440539 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.439244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca" (OuterVolumeSpecName: "client-ca") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440948 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config" (OuterVolumeSpecName: "config") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440968 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.441864 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.441932 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config" (OuterVolumeSpecName: "config") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.446667 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv" (OuterVolumeSpecName: "kube-api-access-4nbnv") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "kube-api-access-4nbnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.450618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd" (OuterVolumeSpecName: "kube-api-access-ddtrd") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "kube-api-access-ddtrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.450633 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.450707 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542090 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542742 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542818 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542877 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542944 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543009 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543108 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543174 4758 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543233 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884815 4758 generic.go:334] "Generic (PLEG): container finished" podID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" exitCode=0 Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884906 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerDied","Data":"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"} Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerDied","Data":"cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4"} Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884981 4758 scope.go:117] "RemoveContainer" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.885291 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.901906 4758 generic.go:334] "Generic (PLEG): container finished" podID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" exitCode=0 Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.901956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerDied","Data":"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"} Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.901981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerDied","Data":"c4d0e2b649449ef67d140345bed4ab7b2439ed578b46e3a77eefc092f8031eec"} Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.902124 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.903113 4758 scope.go:117] "RemoveContainer" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" Jan 30 08:35:49 crc kubenswrapper[4758]: E0130 08:35:49.903535 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368\": container with ID starting with 0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368 not found: ID does not exist" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.903619 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"} err="failed to get container status \"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368\": rpc error: code = NotFound desc = could not find container \"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368\": container with ID starting with 0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368 not found: ID does not exist" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.903638 4758 scope.go:117] "RemoveContainer" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.925015 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.925153 4758 scope.go:117] "RemoveContainer" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" Jan 30 08:35:49 crc kubenswrapper[4758]: E0130 08:35:49.925744 4758 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee\": container with ID starting with d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee not found: ID does not exist" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.925773 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"} err="failed to get container status \"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee\": rpc error: code = NotFound desc = could not find container \"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee\": container with ID starting with d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee not found: ID does not exist" Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.929076 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.932414 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.935810 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044105 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"] Jan 30 08:35:50 crc kubenswrapper[4758]: E0130 08:35:50.044323 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 
08:35:50.044334 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager" Jan 30 08:35:50 crc kubenswrapper[4758]: E0130 08:35:50.044348 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044353 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044469 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044483 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044914 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.048148 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050555 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050657 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050683 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " 
pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.055806 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064604 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064790 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064924 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.065159 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.068884 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"] Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152488 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152665 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152708 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.153687 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.155350 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.155762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.163244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.169146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 
08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.368273 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.572855 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"] Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.909114 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerStarted","Data":"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"} Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.909174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerStarted","Data":"952f87f394f721284daffcec11f065bb01a614ab3053badb7c5d9e39e272998d"} Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.909190 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.924168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.934383 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" podStartSLOduration=1.934359666 podStartE2EDuration="1.934359666s" podCreationTimestamp="2026-01-30 08:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:50.930023891 +0000 UTC m=+355.902335452" 
watchObservedRunningTime="2026-01-30 08:35:50.934359666 +0000 UTC m=+355.906671227" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.042607 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"] Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.043277 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046019 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046078 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046223 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046270 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046443 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.047754 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.063386 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"] Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066429 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066724 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066854 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.168200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: 
\"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.169554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.170084 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.170197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.171612 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.171948 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.186156 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.186902 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.356952 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.554416 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"] Jan 30 08:35:51 crc kubenswrapper[4758]: W0130 08:35:51.557704 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d487c06_79ec_4b43_9bea_781f8fb2ec82.slice/crio-e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2 WatchSource:0}: Error finding container e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2: Status 404 returned error can't find the container with id e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2 Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.780295 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" path="/var/lib/kubelet/pods/19653811-dadf-4512-9f60-60cd3fcb9dc3/volumes" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.781725 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" path="/var/lib/kubelet/pods/2ef3df9a-441b-45c6-ae42-22bf2ced1596/volumes" Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.917796 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerStarted","Data":"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"} Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.917850 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" 
event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerStarted","Data":"e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2"} Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.933629 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" podStartSLOduration=2.933611316 podStartE2EDuration="2.933611316s" podCreationTimestamp="2026-01-30 08:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:51.930097047 +0000 UTC m=+356.902408608" watchObservedRunningTime="2026-01-30 08:35:51.933611316 +0000 UTC m=+356.905922867" Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.387881 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.387948 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.923020 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.926933 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 
08:36:04.058680 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-f5m97"] Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.060295 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.076210 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-f5m97"] Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-registry-certificates\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-registry-tls\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241829 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-bound-sa-token\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241855 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-trusted-ca\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.242391 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bae986b6-2a59-4358-a530-067256d2e6fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.242421 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6pqm\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-kube-api-access-b6pqm\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.242440 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bae986b6-2a59-4358-a530-067256d2e6fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.263587 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-bound-sa-token\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-trusted-ca\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343666 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bae986b6-2a59-4358-a530-067256d2e6fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343695 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6pqm\" (UniqueName: 
\"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-kube-api-access-b6pqm\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343727 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bae986b6-2a59-4358-a530-067256d2e6fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343782 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-registry-certificates\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-registry-tls\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.345074 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bae986b6-2a59-4358-a530-067256d2e6fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.345132 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-trusted-ca\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.346107 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-registry-certificates\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.349553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-registry-tls\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.352292 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bae986b6-2a59-4358-a530-067256d2e6fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.369857 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-bound-sa-token\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.375083 
4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6pqm\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-kube-api-access-b6pqm\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.376275 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.799557 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-f5m97"] Jan 30 08:36:04 crc kubenswrapper[4758]: W0130 08:36:04.803166 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae986b6_2a59_4358_a530_067256d2e6fe.slice/crio-c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0 WatchSource:0}: Error finding container c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0: Status 404 returned error can't find the container with id c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0 Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.989117 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" event={"ID":"bae986b6-2a59-4358-a530-067256d2e6fe","Type":"ContainerStarted","Data":"e90f5c4fce400ee2fd1a610fcb261ff538276f6e4db2f7776c102a815fa2829d"} Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.989170 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" event={"ID":"bae986b6-2a59-4358-a530-067256d2e6fe","Type":"ContainerStarted","Data":"c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0"} Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 
08:36:04.989284 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.037599 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" podStartSLOduration=18.03757812 podStartE2EDuration="18.03757812s" podCreationTimestamp="2026-01-30 08:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:05.009637146 +0000 UTC m=+369.981948697" watchObservedRunningTime="2026-01-30 08:36:22.03757812 +0000 UTC m=+387.009889691" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.040445 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"] Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.040672 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" containerID="cri-o://5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" gracePeriod=30 Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.088480 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"] Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.088774 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" containerID="cri-o://cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" gracePeriod=30 Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.401348 4758 
patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.401958 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.792534 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.912757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913126 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913175 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " Jan 30 08:36:22 crc kubenswrapper[4758]: 
I0130 08:36:22.913205 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913751 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.914699 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config" (OuterVolumeSpecName: "config") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.923371 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.936412 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27" (OuterVolumeSpecName: "kube-api-access-4ct27") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "kube-api-access-4ct27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.987923 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014843 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014873 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014912 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014926 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086577 4758 generic.go:334] "Generic (PLEG): container finished" podID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" exitCode=0 Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086629 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerDied","Data":"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"} Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086654 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerDied","Data":"e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2"} Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086674 4758 scope.go:117] "RemoveContainer" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086758 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.094924 4758 generic.go:334] "Generic (PLEG): container finished" podID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" exitCode=0 Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.094965 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerDied","Data":"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"} Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.094995 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerDied","Data":"952f87f394f721284daffcec11f065bb01a614ab3053badb7c5d9e39e272998d"} Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.095016 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.112755 4758 scope.go:117] "RemoveContainer" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116305 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116387 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116425 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116572 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116620 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: 
\"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " Jan 30 08:36:23 crc kubenswrapper[4758]: E0130 08:36:23.117418 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b\": container with ID starting with cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b not found: ID does not exist" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.117477 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"} err="failed to get container status \"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b\": rpc error: code = NotFound desc = could not find container \"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b\": container with ID starting with cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b not found: ID does not exist" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.117507 4758 scope.go:117] "RemoveContainer" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.117959 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.118230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.118794 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config" (OuterVolumeSpecName: "config") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.121142 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"] Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.121354 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.123446 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2" (OuterVolumeSpecName: "kube-api-access-2zjt2") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "kube-api-access-2zjt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.132398 4758 scope.go:117] "RemoveContainer" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" Jan 30 08:36:23 crc kubenswrapper[4758]: E0130 08:36:23.132895 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6\": container with ID starting with 5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6 not found: ID does not exist" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.132926 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"} err="failed to get container status \"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6\": rpc error: code = NotFound desc = could not find container \"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6\": container with ID starting with 5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6 not found: ID does not exist" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.135596 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"] Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218331 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218376 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") on 
node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218389 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218400 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218414 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.423358 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"] Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.428497 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"] Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.775333 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" path="/var/lib/kubelet/pods/0d487c06-79ec-4b43-9bea-781f8fb2ec82/volumes" Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.776139 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" path="/var/lib/kubelet/pods/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5/volumes" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067443 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868c8b59f8-722w2"] Jan 30 08:36:24 crc kubenswrapper[4758]: E0130 08:36:24.067798 4758 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067819 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: E0130 08:36:24.067834 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067840 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067949 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067966 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.068387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.071610 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072372 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072421 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072492 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072652 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072809 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072914 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.073094 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076128 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076237 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076467 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076476 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076592 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:36:24 crc kubenswrapper[4758]: 
I0130 08:36:24.076705 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.080109 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.089880 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.090328 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868c8b59f8-722w2"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-serving-cert\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130806 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e510388-48e8-4703-aa3e-ba6334438995-serving-cert\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130905 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-client-ca\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") 
" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-proxy-ca-bundles\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131090 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6hc\" (UniqueName: \"kubernetes.io/projected/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-kube-api-access-4h6hc\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-config\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-client-ca\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131193 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lblml\" (UniqueName: \"kubernetes.io/projected/8e510388-48e8-4703-aa3e-ba6334438995-kube-api-access-lblml\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-config\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232605 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-serving-cert\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232648 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e510388-48e8-4703-aa3e-ba6334438995-serving-cert\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-client-ca\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-proxy-ca-bundles\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232734 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6hc\" (UniqueName: \"kubernetes.io/projected/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-kube-api-access-4h6hc\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-config\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232771 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-client-ca\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232785 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lblml\" (UniqueName: 
\"kubernetes.io/projected/8e510388-48e8-4703-aa3e-ba6334438995-kube-api-access-lblml\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-config\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.234482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-config\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.234963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-client-ca\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.235069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-proxy-ca-bundles\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.235703 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-config\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.235773 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-client-ca\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.241500 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e510388-48e8-4703-aa3e-ba6334438995-serving-cert\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.254630 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lblml\" (UniqueName: \"kubernetes.io/projected/8e510388-48e8-4703-aa3e-ba6334438995-kube-api-access-lblml\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.255968 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-serving-cert\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 
crc kubenswrapper[4758]: I0130 08:36:24.257733 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6hc\" (UniqueName: \"kubernetes.io/projected/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-kube-api-access-4h6hc\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.381116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.386213 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.399560 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.470658 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.766150 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868c8b59f8-722w2"] Jan 30 08:36:24 crc kubenswrapper[4758]: W0130 08:36:24.784263 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e510388_48e8_4703_aa3e_ba6334438995.slice/crio-f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6 WatchSource:0}: Error finding container f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6: Status 404 returned error can't find the container with id f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6 Jan 30 08:36:25 crc 
kubenswrapper[4758]: I0130 08:36:25.101136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng"] Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.110177 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" event={"ID":"8e510388-48e8-4703-aa3e-ba6334438995","Type":"ContainerStarted","Data":"d3976be3ab28fb14d221ba7e20ade69478b6b39f2dee7f9a1a1f31487e53c343"} Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.110214 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" event={"ID":"8e510388-48e8-4703-aa3e-ba6334438995","Type":"ContainerStarted","Data":"f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6"} Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.110750 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.114212 4758 patch_prober.go:28] interesting pod/controller-manager-868c8b59f8-722w2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.114274 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" podUID="8e510388-48e8-4703-aa3e-ba6334438995" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.789702 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" podStartSLOduration=3.789681592 podStartE2EDuration="3.789681592s" podCreationTimestamp="2026-01-30 08:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:25.139567509 +0000 UTC m=+390.111879070" watchObservedRunningTime="2026-01-30 08:36:25.789681592 +0000 UTC m=+390.761993143" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.124434 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" event={"ID":"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32","Type":"ContainerStarted","Data":"ee0f472ad92e87130e178e4fcee8edd4fd503dd0dc0ba10bba366b42697113d4"} Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.124472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" event={"ID":"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32","Type":"ContainerStarted","Data":"b976e67b99cb919ab002e1d190e682ba660f41dafbfadba4cb86c5f7224608ba"} Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.124991 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.130609 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.131019 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.147862 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" podStartSLOduration=4.147835108 podStartE2EDuration="4.147835108s" podCreationTimestamp="2026-01-30 08:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:26.137922209 +0000 UTC m=+391.110233780" watchObservedRunningTime="2026-01-30 08:36:26.147835108 +0000 UTC m=+391.120146669" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.910555 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.911362 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xpchq" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server" containerID="cri-o://0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.929381 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.929961 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fvrs2" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server" containerID="cri-o://b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.939083 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.939339 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" 
containerName="marketplace-operator" containerID="cri-o://9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.950240 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.950449 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x78bc" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server" containerID="cri-o://d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.962547 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lsfmf"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.963355 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.995168 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.995571 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqdsb" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server" containerID="cri-o://f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" gracePeriod=30 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.037511 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lsfmf"] Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.073392 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.073641 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh2k4\" (UniqueName: \"kubernetes.io/projected/f34a2860-1860-4032-8f5d-9278338c1b19-kube-api-access-nh2k4\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.073785 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.163794 4758 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerID="b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.163869 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.175379 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.175437 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh2k4\" (UniqueName: \"kubernetes.io/projected/f34a2860-1860-4032-8f5d-9278338c1b19-kube-api-access-nh2k4\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.175483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.177383 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.183537 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc 
kubenswrapper[4758]: I0130 08:36:27.192193 4758 generic.go:334] "Generic (PLEG): container finished" podID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerID="9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.192275 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerDied","Data":"9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.192307 4758 scope.go:117] "RemoveContainer" containerID="00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.199874 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh2k4\" (UniqueName: \"kubernetes.io/projected/f34a2860-1860-4032-8f5d-9278338c1b19-kube-api-access-nh2k4\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.202618 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerID="0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.202698 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.204554 4758 generic.go:334] "Generic (PLEG): container finished" podID="a8160a87-6f56-4d61-a17b-8049588a293b" containerID="d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16" 
exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.205525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.292387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.451030 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.586904 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"49a66643-dc8c-4c84-9345-7f98676dc1d3\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.586970 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"49a66643-dc8c-4c84-9345-7f98676dc1d3\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.587097 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"49a66643-dc8c-4c84-9345-7f98676dc1d3\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.590061 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "49a66643-dc8c-4c84-9345-7f98676dc1d3" (UID: "49a66643-dc8c-4c84-9345-7f98676dc1d3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.590468 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "49a66643-dc8c-4c84-9345-7f98676dc1d3" (UID: "49a66643-dc8c-4c84-9345-7f98676dc1d3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.593107 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv" (OuterVolumeSpecName: "kube-api-access-97djv") pod "49a66643-dc8c-4c84-9345-7f98676dc1d3" (UID: "49a66643-dc8c-4c84-9345-7f98676dc1d3"). InnerVolumeSpecName "kube-api-access-97djv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.640570 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689564 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689623 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689863 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689874 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689883 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.690749 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities" (OuterVolumeSpecName: "utilities") pod "ad53cc1a-5ef4-4e05-a996-b8e53194ef37" (UID: "ad53cc1a-5ef4-4e05-a996-b8e53194ef37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.692604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw" (OuterVolumeSpecName: "kube-api-access-tvflw") pod "ad53cc1a-5ef4-4e05-a996-b8e53194ef37" (UID: "ad53cc1a-5ef4-4e05-a996-b8e53194ef37"). InnerVolumeSpecName "kube-api-access-tvflw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.703760 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.718423 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.735968 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.760269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad53cc1a-5ef4-4e05-a996-b8e53194ef37" (UID: "ad53cc1a-5ef4-4e05-a996-b8e53194ef37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.792762 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"a8160a87-6f56-4d61-a17b-8049588a293b\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.793273 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"a8160a87-6f56-4d61-a17b-8049588a293b\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794785 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"ee1dd099-7697-43e1-9777-b3735c8013e8\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794811 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"9959a40e-acf9-4c57-967c-bbd102964dbb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"ee1dd099-7697-43e1-9777-b3735c8013e8\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"9959a40e-acf9-4c57-967c-bbd102964dbb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"9959a40e-acf9-4c57-967c-bbd102964dbb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794976 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"ee1dd099-7697-43e1-9777-b3735c8013e8\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794993 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"a8160a87-6f56-4d61-a17b-8049588a293b\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.795278 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.795303 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.795316 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.799462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities" (OuterVolumeSpecName: "utilities") pod "a8160a87-6f56-4d61-a17b-8049588a293b" (UID: "a8160a87-6f56-4d61-a17b-8049588a293b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.800562 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities" (OuterVolumeSpecName: "utilities") pod "ee1dd099-7697-43e1-9777-b3735c8013e8" (UID: "ee1dd099-7697-43e1-9777-b3735c8013e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.801161 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities" (OuterVolumeSpecName: "utilities") pod "9959a40e-acf9-4c57-967c-bbd102964dbb" (UID: "9959a40e-acf9-4c57-967c-bbd102964dbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.801741 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94" (OuterVolumeSpecName: "kube-api-access-bdh94") pod "a8160a87-6f56-4d61-a17b-8049588a293b" (UID: "a8160a87-6f56-4d61-a17b-8049588a293b"). InnerVolumeSpecName "kube-api-access-bdh94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.803479 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2" (OuterVolumeSpecName: "kube-api-access-nz8l2") pod "9959a40e-acf9-4c57-967c-bbd102964dbb" (UID: "9959a40e-acf9-4c57-967c-bbd102964dbb"). InnerVolumeSpecName "kube-api-access-nz8l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.804861 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h" (OuterVolumeSpecName: "kube-api-access-76w9h") pod "ee1dd099-7697-43e1-9777-b3735c8013e8" (UID: "ee1dd099-7697-43e1-9777-b3735c8013e8"). InnerVolumeSpecName "kube-api-access-76w9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.824643 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8160a87-6f56-4d61-a17b-8049588a293b" (UID: "a8160a87-6f56-4d61-a17b-8049588a293b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.862229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee1dd099-7697-43e1-9777-b3735c8013e8" (UID: "ee1dd099-7697-43e1-9777-b3735c8013e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.897973 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898010 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898026 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898054 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898067 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898079 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898090 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898101 
4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.949833 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lsfmf"]
Jan 30 08:36:27 crc kubenswrapper[4758]: W0130 08:36:27.960882 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf34a2860_1860_4032_8f5d_9278338c1b19.slice/crio-81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed WatchSource:0}: Error finding container 81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed: Status 404 returned error can't find the container with id 81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed
Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.989975 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9959a40e-acf9-4c57-967c-bbd102964dbb" (UID: "9959a40e-acf9-4c57-967c-bbd102964dbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.999416 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.210350 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" event={"ID":"f34a2860-1860-4032-8f5d-9278338c1b19","Type":"ContainerStarted","Data":"2a87ec21ebcb6880dfa8aa72af6286e3bfe6ad41b3c7b5d697ee49a2369f7c7a"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.210389 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" event={"ID":"f34a2860-1860-4032-8f5d-9278338c1b19","Type":"ContainerStarted","Data":"81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.210844 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.212023 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lsfmf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body=
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.212074 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" podUID="f34a2860-1860-4032-8f5d-9278338c1b19" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.214307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"b94b087d70a109782aecba154dc0988599437def55360286dd53d57c7243892b"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.214359 4758 scope.go:117] "RemoveContainer" containerID="d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.214322 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.217529 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"0bb05ce1b23a66a7b1423906b5cec1cc9cdc27900f23c97851de922a49efcf1f"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.217591 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.219695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerDied","Data":"4118893c53a7f8a706c32c2a7e6e6021db4c895afd2c8cfa46828333ae413f1b"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.219835 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.225076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.225177 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.231811 4758 generic.go:334] "Generic (PLEG): container finished" podID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" exitCode=0
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.231872 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.232303 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.233196 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"2d31c2681539f71b539b22493850f84cbd075530d6103b3a81f2ee6e83bc8595"}
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.236970 4758 scope.go:117] "RemoveContainer" containerID="b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.241733 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" podStartSLOduration=2.241712604 podStartE2EDuration="2.241712604s" podCreationTimestamp="2026-01-30 08:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:28.237572605 +0000 UTC m=+393.209884166" watchObservedRunningTime="2026-01-30 08:36:28.241712604 +0000 UTC m=+393.214024155"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.251600 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.258238 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.273436 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.276095 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.277595 4758 scope.go:117] "RemoveContainer" containerID="15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.292536 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.307142 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.307511 4758 scope.go:117] "RemoveContainer" containerID="b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.315642 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.319541 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.323077 4758 scope.go:117] "RemoveContainer" containerID="c3f68acbef392b308c2ed3b35e3acc112673547e011fad5471bd97a0db59032a"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.334767 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.337867 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"]
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.341154 4758 scope.go:117] "RemoveContainer" containerID="4fd34a8816ab9b8ac36f3ee7f4fccd351c108f7b629f6428e022613e9b4c1cfd"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.355064 4758 scope.go:117] "RemoveContainer" containerID="9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.373396 4758 scope.go:117] "RemoveContainer" containerID="0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.388988 4758 scope.go:117] "RemoveContainer" containerID="b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.408251 4758 scope.go:117] "RemoveContainer" containerID="dab894bc290593f6bf4efa3b73a1fcf4864265047195960150260493f5158b78"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.429817 4758 scope.go:117] "RemoveContainer" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.446119 4758 scope.go:117] "RemoveContainer" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.472975 4758 scope.go:117] "RemoveContainer" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.490557 4758 scope.go:117] "RemoveContainer" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"
Jan 30 08:36:28 crc kubenswrapper[4758]: E0130 08:36:28.491067 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929\": container with ID starting with f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929 not found: ID does not exist" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491128 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"} err="failed to get container status \"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929\": rpc error: code = NotFound desc = could not find container \"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929\": container with ID starting with f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929 not found: ID does not exist"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491164 4758 scope.go:117] "RemoveContainer" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"
Jan 30 08:36:28 crc kubenswrapper[4758]: E0130 08:36:28.491718 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705\": container with ID starting with fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705 not found: ID does not exist" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491757 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"} err="failed to get container status \"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705\": rpc error: code = NotFound desc = could not find container \"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705\": container with ID starting with fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705 not found: ID does not exist"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491781 4758 scope.go:117] "RemoveContainer" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"
Jan 30 08:36:28 crc kubenswrapper[4758]: E0130 08:36:28.492097 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601\": container with ID starting with 08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601 not found: ID does not exist" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"
Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.492129 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"} err="failed to get container status \"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601\": rpc error: code = NotFound desc = could not find container \"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601\": container with ID starting with 08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601 not found: ID does not exist"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.245408 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.736952 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7tq2j"]
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.737518 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.737625 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.737715 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.737794 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.737873 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.737950 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738061 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738153 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738244 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738321 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738400 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738475 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738556 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738640 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738730 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738814 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738895 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738965 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739034 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739121 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739198 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739273 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-content"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739345 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739414 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739496 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739580 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-utilities"
Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739664 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739746 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739929 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740019 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740118 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740292 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740438 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740525 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.741537 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.744683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.778557 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" path="/var/lib/kubelet/pods/49a66643-dc8c-4c84-9345-7f98676dc1d3/volumes"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.779171 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" path="/var/lib/kubelet/pods/9959a40e-acf9-4c57-967c-bbd102964dbb/volumes"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.779872 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" path="/var/lib/kubelet/pods/a8160a87-6f56-4d61-a17b-8049588a293b/volumes"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.781095 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" path="/var/lib/kubelet/pods/ad53cc1a-5ef4-4e05-a996-b8e53194ef37/volumes"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.781802 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" path="/var/lib/kubelet/pods/ee1dd099-7697-43e1-9777-b3735c8013e8/volumes"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.808914 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tq2j"]
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.822007 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77qv\" (UniqueName: \"kubernetes.io/projected/73f8c779-64cc-4d7d-8762-4f8cf1611071-kube-api-access-p77qv\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.822159 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-catalog-content\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.822187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-utilities\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.922957 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p77qv\" (UniqueName: \"kubernetes.io/projected/73f8c779-64cc-4d7d-8762-4f8cf1611071-kube-api-access-p77qv\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923230 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-catalog-content\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923348 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-utilities\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923724 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-catalog-content\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923784 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-utilities\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.939450 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77qv\" (UniqueName: \"kubernetes.io/projected/73f8c779-64cc-4d7d-8762-4f8cf1611071-kube-api-access-p77qv\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.054901 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tq2j"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.502849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tq2j"]
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.725173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fmg9d"]
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.729925 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.735157 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.741832 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmg9d"]
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.847800 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f82fd\" (UniqueName: \"kubernetes.io/projected/8404e227-68e2-4686-a04d-00048ba303ec-kube-api-access-f82fd\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.847899 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-catalog-content\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.847932 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-utilities\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.949332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-catalog-content\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.949386 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-utilities\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.949424 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f82fd\" (UniqueName: \"kubernetes.io/projected/8404e227-68e2-4686-a04d-00048ba303ec-kube-api-access-f82fd\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.950092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-catalog-content\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.950294 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-utilities\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.967149 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f82fd\" (UniqueName: \"kubernetes.io/projected/8404e227-68e2-4686-a04d-00048ba303ec-kube-api-access-f82fd\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.051144 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.255912 4758 generic.go:334] "Generic (PLEG): container finished" podID="73f8c779-64cc-4d7d-8762-4f8cf1611071" containerID="f8ffe2f184595707503d9582ebebbbf88874d1a789820fcaf5af3b94d7c0101b" exitCode=0
Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.255952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerDied","Data":"f8ffe2f184595707503d9582ebebbbf88874d1a789820fcaf5af3b94d7c0101b"}
Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.255984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerStarted","Data":"de321136531ad85f4df6f150b06b26efab30414dd9bf3b1dc457601760e55716"}
Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.464151 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmg9d"]
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.133650 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6v7r"]
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.135484 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.142653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.149282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6v7r"]
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.262957 4758 generic.go:334] "Generic (PLEG): container finished" podID="73f8c779-64cc-4d7d-8762-4f8cf1611071" containerID="369097a25de3facae9c45c855cc0128200e681604ff8b7e66670def5606597b7" exitCode=0
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.263125 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerDied","Data":"369097a25de3facae9c45c855cc0128200e681604ff8b7e66670def5606597b7"}
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.264807 4758 generic.go:334] "Generic (PLEG): container finished" podID="8404e227-68e2-4686-a04d-00048ba303ec" containerID="c348f10a599c9eddf2e5183bbd4b31ec6c663a2d59df5538736c103ada74044f" exitCode=0
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.264867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerDied","Data":"c348f10a599c9eddf2e5183bbd4b31ec6c663a2d59df5538736c103ada74044f"}
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.264933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerStarted","Data":"d4055b4f3028ac245c2cf8d96e568bc6013013eda34858f6f0bf4b80d703242b"}
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.284702 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-catalog-content\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.284798 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-utilities\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.284839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqkw\" (UniqueName: \"kubernetes.io/projected/76812ff6-8f58-4c5c-8606-3cc8f949146e-kube-api-access-4jqkw\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.386235 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-utilities\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.386307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqkw\" (UniqueName: \"kubernetes.io/projected/76812ff6-8f58-4c5c-8606-3cc8f949146e-kube-api-access-4jqkw\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.386382 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-catalog-content\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.387744 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-catalog-content\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.387999 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-utilities\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.408128 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqkw\" (UniqueName: \"kubernetes.io/projected/76812ff6-8f58-4c5c-8606-3cc8f949146e-kube-api-access-4jqkw\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.494522 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.939728 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6v7r"] Jan 30 08:36:32 crc kubenswrapper[4758]: W0130 08:36:32.970086 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76812ff6_8f58_4c5c_8606_3cc8f949146e.slice/crio-5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9 WatchSource:0}: Error finding container 5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9: Status 404 returned error can't find the container with id 5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9 Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.129848 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kb7fp"] Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.131111 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.137855 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.146897 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb7fp"] Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.200352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgsd8\" (UniqueName: \"kubernetes.io/projected/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-kube-api-access-vgsd8\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.200495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-utilities\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.200543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-catalog-content\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.270243 4758 generic.go:334] "Generic (PLEG): container finished" podID="76812ff6-8f58-4c5c-8606-3cc8f949146e" containerID="21d836d2bd6a23ce1382948dd88b8d83c12e6454681320a0bf47bec815075963" exitCode=0 Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 
08:36:33.270397 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerDied","Data":"21d836d2bd6a23ce1382948dd88b8d83c12e6454681320a0bf47bec815075963"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.270642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerStarted","Data":"5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.272954 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerStarted","Data":"38945fb8c984b8a4edc68ba9077681ce4168f5a1108149721489c04b07874e8f"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.277325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerStarted","Data":"3258795c1de5b14247ec58cd933008d63c35f76eb411f28fce654c91c4c49f18"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.301777 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-catalog-content\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.302137 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgsd8\" (UniqueName: \"kubernetes.io/projected/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-kube-api-access-vgsd8\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " 
pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.302297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-utilities\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.302799 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-utilities\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.303145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-catalog-content\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.324897 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7tq2j" podStartSLOduration=2.902215901 podStartE2EDuration="4.324878254s" podCreationTimestamp="2026-01-30 08:36:29 +0000 UTC" firstStartedPulling="2026-01-30 08:36:31.257444793 +0000 UTC m=+396.229756344" lastFinishedPulling="2026-01-30 08:36:32.680107146 +0000 UTC m=+397.652418697" observedRunningTime="2026-01-30 08:36:33.321804828 +0000 UTC m=+398.294116389" watchObservedRunningTime="2026-01-30 08:36:33.324878254 +0000 UTC m=+398.297189805" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.325017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgsd8\" 
(UniqueName: \"kubernetes.io/projected/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-kube-api-access-vgsd8\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.450421 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.952187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb7fp"] Jan 30 08:36:33 crc kubenswrapper[4758]: W0130 08:36:33.964213 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb6d53d3_8c72_4f16_9bf1_f196d3c85e3a.slice/crio-9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc WatchSource:0}: Error finding container 9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc: Status 404 returned error can't find the container with id 9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.283950 4758 generic.go:334] "Generic (PLEG): container finished" podID="76812ff6-8f58-4c5c-8606-3cc8f949146e" containerID="7e0ba33626c93259522a7336776f942e30af68d09e32a4f51adb4fc2c123cfa4" exitCode=0 Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.284065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerDied","Data":"7e0ba33626c93259522a7336776f942e30af68d09e32a4f51adb4fc2c123cfa4"} Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.286130 4758 generic.go:334] "Generic (PLEG): container finished" podID="8404e227-68e2-4686-a04d-00048ba303ec" containerID="38945fb8c984b8a4edc68ba9077681ce4168f5a1108149721489c04b07874e8f" exitCode=0 Jan 30 08:36:34 crc 
kubenswrapper[4758]: I0130 08:36:34.286404 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerDied","Data":"38945fb8c984b8a4edc68ba9077681ce4168f5a1108149721489c04b07874e8f"} Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.290693 4758 generic.go:334] "Generic (PLEG): container finished" podID="db6d53d3-8c72-4f16-9bf1-f196d3c85e3a" containerID="9dcde49a044f38586604c79fa1c732ff68334f7617a8d18dc29f782b45915718" exitCode=0 Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.290829 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerDied","Data":"9dcde49a044f38586604c79fa1c732ff68334f7617a8d18dc29f782b45915718"} Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.290902 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerStarted","Data":"9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.297117 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerStarted","Data":"50fdb5f1c67782a83c6840bc3083ecf987283ccf213b1ae429faa56af078d857"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.299732 4758 generic.go:334] "Generic (PLEG): container finished" podID="db6d53d3-8c72-4f16-9bf1-f196d3c85e3a" containerID="297fb77668c5725bee55b7909c07ba5e9b3e426383b1d52c070af4bb329f9e93" exitCode=0 Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.299863 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" 
event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerDied","Data":"297fb77668c5725bee55b7909c07ba5e9b3e426383b1d52c070af4bb329f9e93"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.303610 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerStarted","Data":"e681855d47292914569cf94fea16ebd2c6e82d35ca52f25ec4071606ca0f2baf"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.323782 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fmg9d" podStartSLOduration=2.597295719 podStartE2EDuration="5.323765103s" podCreationTimestamp="2026-01-30 08:36:30 +0000 UTC" firstStartedPulling="2026-01-30 08:36:32.266447336 +0000 UTC m=+397.238758887" lastFinishedPulling="2026-01-30 08:36:34.99291672 +0000 UTC m=+399.965228271" observedRunningTime="2026-01-30 08:36:35.319580432 +0000 UTC m=+400.291891993" watchObservedRunningTime="2026-01-30 08:36:35.323765103 +0000 UTC m=+400.296076654" Jan 30 08:36:36 crc kubenswrapper[4758]: I0130 08:36:36.314367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerStarted","Data":"7fc28c76864872ae095210db0bb098b7be187468b5278cc847c4b4f303cf18f6"} Jan 30 08:36:36 crc kubenswrapper[4758]: I0130 08:36:36.334878 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6v7r" podStartSLOduration=2.945673814 podStartE2EDuration="4.334864402s" podCreationTimestamp="2026-01-30 08:36:32 +0000 UTC" firstStartedPulling="2026-01-30 08:36:33.27292478 +0000 UTC m=+398.245236331" lastFinishedPulling="2026-01-30 08:36:34.662115368 +0000 UTC m=+399.634426919" observedRunningTime="2026-01-30 08:36:35.366777186 +0000 UTC m=+400.339088737" 
watchObservedRunningTime="2026-01-30 08:36:36.334864402 +0000 UTC m=+401.307175953" Jan 30 08:36:36 crc kubenswrapper[4758]: I0130 08:36:36.336204 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kb7fp" podStartSLOduration=1.9451249179999999 podStartE2EDuration="3.336199914s" podCreationTimestamp="2026-01-30 08:36:33 +0000 UTC" firstStartedPulling="2026-01-30 08:36:34.294104115 +0000 UTC m=+399.266415656" lastFinishedPulling="2026-01-30 08:36:35.685179091 +0000 UTC m=+400.657490652" observedRunningTime="2026-01-30 08:36:36.333712446 +0000 UTC m=+401.306024007" watchObservedRunningTime="2026-01-30 08:36:36.336199914 +0000 UTC m=+401.308511465" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.056200 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.056730 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.103313 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.388105 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.051558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.051610 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.087622 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.419777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:42 crc kubenswrapper[4758]: I0130 08:36:42.495430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:42 crc kubenswrapper[4758]: I0130 08:36:42.495507 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:42 crc kubenswrapper[4758]: I0130 08:36:42.537639 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.402020 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.451655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.451709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.495216 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:44 crc kubenswrapper[4758]: I0130 08:36:44.434724 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:49 crc kubenswrapper[4758]: I0130 08:36:49.595520 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" 
podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" containerID="cri-o://1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" gracePeriod=30 Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.116390 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.238957 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.239019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.239058 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.239143 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.240160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.240343 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.240513 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241340 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241383 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241580 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241926 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241962 4758 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.249696 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.255183 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.261596 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.262022 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx" (OuterVolumeSpecName: "kube-api-access-pn5vx") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "kube-api-access-pn5vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.265332 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.269253 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.343611 4758 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344062 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344188 4758 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344263 4758 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344349 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.417149 4758 generic.go:334] "Generic (PLEG): container finished" podID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" exitCode=0 Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.417249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" 
event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerDied","Data":"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b"} Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.418233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerDied","Data":"cf6f00a97d6458a9fc656f00daeea6735f190156c2f34abea4396149d2c34aee"} Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.417332 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.418284 4758 scope.go:117] "RemoveContainer" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.440167 4758 scope.go:117] "RemoveContainer" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" Jan 30 08:36:50 crc kubenswrapper[4758]: E0130 08:36:50.440754 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b\": container with ID starting with 1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b not found: ID does not exist" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.440802 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b"} err="failed to get container status \"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b\": rpc error: code = NotFound desc = could not find container \"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b\": container with ID 
starting with 1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b not found: ID does not exist" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.456215 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.462577 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:36:51 crc kubenswrapper[4758]: I0130 08:36:51.780715 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" path="/var/lib/kubelet/pods/467f100f-83e4-43b0-bcf0-16cfe7cb0393/volumes" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.387405 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.387731 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.387778 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.388405 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1"} 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.388467 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1" gracePeriod=600 Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437636 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1" exitCode=0 Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1"} Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437946 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9"} Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437964 4758 scope.go:117] "RemoveContainer" containerID="1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916" Jan 30 08:38:52 crc kubenswrapper[4758]: I0130 08:38:52.388032 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 08:38:52 crc kubenswrapper[4758]: I0130 08:38:52.388826 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:39:22 crc kubenswrapper[4758]: I0130 08:39:22.386945 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:39:22 crc kubenswrapper[4758]: I0130 08:39:22.387377 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.387109 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.387709 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.387759 4758 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.388353 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.388411 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9" gracePeriod=600 Jan 30 08:39:52 crc kubenswrapper[4758]: E0130 08:39:52.423234 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cfcde3_10c8_4ece_a78a_9508f04a0f09.slice/crio-12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.362855 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9" exitCode=0 Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.362929 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9"} Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 
08:39:53.363215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89"} Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.363238 4758 scope.go:117] "RemoveContainer" containerID="6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.818424 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"] Jan 30 08:41:49 crc kubenswrapper[4758]: E0130 08:41:49.819159 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.819172 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.819272 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.819646 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.823442 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.823759 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.823875 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bkp4x" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.834316 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gzhzw"] Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.835021 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gzhzw" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.838745 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hdm2h" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.859262 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xrm7r"] Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.859994 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.863452 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-b4tfv" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.865509 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"] Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.882421 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gzhzw"] Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.889896 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqxb\" (UniqueName: \"kubernetes.io/projected/3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb-kube-api-access-mjqxb\") pod \"cert-manager-858654f9db-gzhzw\" (UID: \"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb\") " pod="cert-manager/cert-manager-858654f9db-gzhzw" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.889999 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzkc\" (UniqueName: \"kubernetes.io/projected/738fc587-0a87-41b5-b2b0-690fa92d754e-kube-api-access-qbzkc\") pod \"cert-manager-cainjector-cf98fcc89-fpg8k\" (UID: \"738fc587-0a87-41b5-b2b0-690fa92d754e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.896627 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xrm7r"] Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.991804 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvvj\" (UniqueName: \"kubernetes.io/projected/b55a68e5-f198-4525-9d07-2acdcb906d36-kube-api-access-kcvvj\") pod \"cert-manager-webhook-687f57d79b-xrm7r\" 
(UID: \"b55a68e5-f198-4525-9d07-2acdcb906d36\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.991896 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqxb\" (UniqueName: \"kubernetes.io/projected/3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb-kube-api-access-mjqxb\") pod \"cert-manager-858654f9db-gzhzw\" (UID: \"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb\") " pod="cert-manager/cert-manager-858654f9db-gzhzw" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.991938 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbzkc\" (UniqueName: \"kubernetes.io/projected/738fc587-0a87-41b5-b2b0-690fa92d754e-kube-api-access-qbzkc\") pod \"cert-manager-cainjector-cf98fcc89-fpg8k\" (UID: \"738fc587-0a87-41b5-b2b0-690fa92d754e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.025849 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqxb\" (UniqueName: \"kubernetes.io/projected/3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb-kube-api-access-mjqxb\") pod \"cert-manager-858654f9db-gzhzw\" (UID: \"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb\") " pod="cert-manager/cert-manager-858654f9db-gzhzw" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.025883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbzkc\" (UniqueName: \"kubernetes.io/projected/738fc587-0a87-41b5-b2b0-690fa92d754e-kube-api-access-qbzkc\") pod \"cert-manager-cainjector-cf98fcc89-fpg8k\" (UID: \"738fc587-0a87-41b5-b2b0-690fa92d754e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.093625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcvvj\" (UniqueName: 
\"kubernetes.io/projected/b55a68e5-f198-4525-9d07-2acdcb906d36-kube-api-access-kcvvj\") pod \"cert-manager-webhook-687f57d79b-xrm7r\" (UID: \"b55a68e5-f198-4525-9d07-2acdcb906d36\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.111877 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcvvj\" (UniqueName: \"kubernetes.io/projected/b55a68e5-f198-4525-9d07-2acdcb906d36-kube-api-access-kcvvj\") pod \"cert-manager-webhook-687f57d79b-xrm7r\" (UID: \"b55a68e5-f198-4525-9d07-2acdcb906d36\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.143439 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.162437 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gzhzw" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.183076 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.432505 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xrm7r"] Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.439120 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.502223 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" event={"ID":"b55a68e5-f198-4525-9d07-2acdcb906d36","Type":"ContainerStarted","Data":"c17ae69228d5790e28ec360f328c86fb5c6b0aee926311ee50cabaae71bbab50"} Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.694005 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"] Jan 30 08:41:50 crc kubenswrapper[4758]: W0130 08:41:50.699216 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod738fc587_0a87_41b5_b2b0_690fa92d754e.slice/crio-b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14 WatchSource:0}: Error finding container b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14: Status 404 returned error can't find the container with id b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14 Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.706009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gzhzw"] Jan 30 08:41:51 crc kubenswrapper[4758]: I0130 08:41:51.513307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gzhzw" event={"ID":"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb","Type":"ContainerStarted","Data":"4dfb8f90ca285635cf40a8dd2396665461fab51bd5d515cde87c62f01c695018"} Jan 30 08:41:51 crc kubenswrapper[4758]: 
I0130 08:41:51.519670 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" event={"ID":"738fc587-0a87-41b5-b2b0-690fa92d754e","Type":"ContainerStarted","Data":"b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14"} Jan 30 08:41:52 crc kubenswrapper[4758]: I0130 08:41:52.387565 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:41:52 crc kubenswrapper[4758]: I0130 08:41:52.388026 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.927110 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930768 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" containerID="cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930786 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" containerID="cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930819 4758 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" containerID="cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930832 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" containerID="cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930845 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" containerID="cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930820 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930857 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" containerID="cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32" gracePeriod=30 Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.974092 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" 
podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" containerID="cri-o://b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353" gracePeriod=30 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.578196 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/2.log" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.578997 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579066 4758 generic.go:334] "Generic (PLEG): container finished" podID="fac75e9c-fc94-4c83-8613-bce0f4744079" containerID="52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19" exitCode=2 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579130 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerDied","Data":"52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579170 4758 scope.go:117] "RemoveContainer" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579938 4758 scope.go:117] "RemoveContainer" containerID="52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19" Jan 30 08:41:59 crc kubenswrapper[4758]: E0130 08:41:59.580348 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-99ddw_openshift-multus(fac75e9c-fc94-4c83-8613-bce0f4744079)\"" pod="openshift-multus/multus-99ddw" podUID="fac75e9c-fc94-4c83-8613-bce0f4744079" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.592594 
4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.601483 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-acl-logging/0.log" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602275 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-controller/0.log" Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602789 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353" exitCode=0 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602825 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26" exitCode=0 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602837 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee" exitCode=0 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602851 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32" exitCode=0 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602861 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4" exitCode=0 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602873 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38" exitCode=0 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602882 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3" exitCode=143 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602894 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda" exitCode=143 Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603321 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603343 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32"} Jan 30 08:41:59 crc kubenswrapper[4758]: 
I0130 08:41:59.603559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603572 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3"} Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603596 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda"} Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.765844 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.769798 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-acl-logging/0.log" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.770859 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-controller/0.log" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.771971 4758 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850405 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-557d2"] Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850766 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850787 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850802 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850812 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850829 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850837 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850850 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850858 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850869 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kubecfg-setup" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850876 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kubecfg-setup" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850888 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850896 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850905 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850912 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850922 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850929 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850938 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850947 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850960 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850969 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850981 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850988 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.851002 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851010 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851505 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851523 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851535 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851547 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851557 4758 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851567 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851578 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851587 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851596 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851607 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851618 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851631 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.851776 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851787 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.854139 4758 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866025 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866097 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-script-lib\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866130 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-systemd-units\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866256 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-env-overrides\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-kubelet\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866350 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-var-lib-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57x7v\" (UniqueName: \"kubernetes.io/projected/3d8c2967-cb4a-439c-a136-e9cf2bf41635-kube-api-access-57x7v\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866467 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-netd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866500 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-log-socket\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866558 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-slash\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-netns\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866669 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-node-log\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866695 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-ovn\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866722 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-bin\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovn-node-metrics-cert\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-config\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-etc-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866937 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-systemd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967657 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967731 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967791 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967841 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod 
\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967898 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967937 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968007 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968032 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968085 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968125 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968144 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968167 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968184 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968221 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: 
\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968255 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968285 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968323 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968368 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968526 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968586 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-env-overrides\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968612 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968639 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-kubelet\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968635 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log" (OuterVolumeSpecName: "node-log") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968711 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket" (OuterVolumeSpecName: "log-socket") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968675 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968739 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-var-lib-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968873 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968914 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968941 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968978 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968667 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-var-lib-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969123 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x7v\" (UniqueName: \"kubernetes.io/projected/3d8c2967-cb4a-439c-a136-e9cf2bf41635-kube-api-access-57x7v\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969166 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-netd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969196 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-log-socket\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969225 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-slash\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969255 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-netns\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-node-log\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969356 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-ovn\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 
08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969399 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-bin\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969430 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovn-node-metrics-cert\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969493 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-config\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-etc-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969553 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-systemd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969582 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969613 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-script-lib\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969642 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-systemd-units\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969668 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-kubelet\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969743 4758 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969858 4758 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969885 4758 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969909 4758 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969942 4758 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969965 4758 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969991 4758 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970013 4758 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970053 4758 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970069 4758 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-bin\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970192 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-netd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970241 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-log-socket\") pod \"ovnkube-node-557d2\" (UID: 
\"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970294 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-slash\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-netns\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970419 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-node-log\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-ovn\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 
08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970486 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971032 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-etc-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969575 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969699 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969745 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash" (OuterVolumeSpecName: "host-slash") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969749 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969770 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969788 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969889 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971298 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-systemd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-config\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971340 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-systemd-units\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971835 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-script-lib\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.974401 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-env-overrides\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.975716 
4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.976030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9" (OuterVolumeSpecName: "kube-api-access-jmdj9") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "kube-api-access-jmdj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.986920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovn-node-metrics-cert\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.991524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.996685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x7v\" (UniqueName: \"kubernetes.io/projected/3d8c2967-cb4a-439c-a136-e9cf2bf41635-kube-api-access-57x7v\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.071912 4758 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072568 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072585 4758 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072594 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072605 4758 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072631 4758 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072639 4758 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072649 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072660 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072669 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.183536 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.479582 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:42:01 crc kubenswrapper[4758]: W0130 08:42:01.540904 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8c2967_cb4a_439c_a136_e9cf2bf41635.slice/crio-cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1 WatchSource:0}: Error finding container cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1: Status 404 returned error can't find the container with id cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1 Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.622095 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/2.log" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.624306 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1"} Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.629514 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-acl-logging/0.log" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.634635 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-controller/0.log" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.635298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" 
event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"8d70505dbacf380ad755907b0497b938a8a8916ec3d2072e37bb1856843d9c78"} Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.635360 4758 scope.go:117] "RemoveContainer" containerID="b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.635397 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.687029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.692314 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.777018 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" path="/var/lib/kubelet/pods/a682aa56-1a48-46dd-a06c-8cbaaeea7008/volumes" Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.653206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gzhzw" event={"ID":"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb","Type":"ContainerStarted","Data":"8377d73a4684655c2a82cb528ad0fb1b898b43892124aba97c643b512a64648b"} Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.655601 4758 generic.go:334] "Generic (PLEG): container finished" podID="3d8c2967-cb4a-439c-a136-e9cf2bf41635" containerID="58dd1c3becbb82dc17e2b1b5f2fc944ca7c5583be8eb280dd1814d2c877c4b6b" exitCode=0 Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.656855 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" 
event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerDied","Data":"58dd1c3becbb82dc17e2b1b5f2fc944ca7c5583be8eb280dd1814d2c877c4b6b"} Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.681087 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gzhzw" podStartSLOduration=2.910203879 podStartE2EDuration="13.681057518s" podCreationTimestamp="2026-01-30 08:41:49 +0000 UTC" firstStartedPulling="2026-01-30 08:41:50.71862958 +0000 UTC m=+715.690941131" lastFinishedPulling="2026-01-30 08:42:01.489483199 +0000 UTC m=+726.461794770" observedRunningTime="2026-01-30 08:42:02.67891428 +0000 UTC m=+727.651225831" watchObservedRunningTime="2026-01-30 08:42:02.681057518 +0000 UTC m=+727.653369069" Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.845909 4758 scope.go:117] "RemoveContainer" containerID="527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.001702 4758 scope.go:117] "RemoveContainer" containerID="38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.028911 4758 scope.go:117] "RemoveContainer" containerID="4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.069749 4758 scope.go:117] "RemoveContainer" containerID="29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.145803 4758 scope.go:117] "RemoveContainer" containerID="86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.173581 4758 scope.go:117] "RemoveContainer" containerID="0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.201536 4758 scope.go:117] "RemoveContainer" 
containerID="1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.222258 4758 scope.go:117] "RemoveContainer" containerID="4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.664075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" event={"ID":"b55a68e5-f198-4525-9d07-2acdcb906d36","Type":"ContainerStarted","Data":"55f9dfbd031e898d8f56752e6f4dfeb70527bb7340f05a9baebca7593e77cfd5"} Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.664294 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.667737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" event={"ID":"738fc587-0a87-41b5-b2b0-690fa92d754e","Type":"ContainerStarted","Data":"372f884e9ea99c21a8b85066994ca44362b21b89d8f91c39facae410df48956e"} Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.670233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"b814f7e08f55cc08d1cdadd4666991121543574ad19b6bb1896293d340442000"} Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.710069 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" podStartSLOduration=2.40806927 podStartE2EDuration="14.710001744s" podCreationTimestamp="2026-01-30 08:41:49 +0000 UTC" firstStartedPulling="2026-01-30 08:41:50.702014074 +0000 UTC m=+715.674325625" lastFinishedPulling="2026-01-30 08:42:03.003946548 +0000 UTC m=+727.976258099" observedRunningTime="2026-01-30 08:42:03.708524218 +0000 UTC m=+728.680835789" 
watchObservedRunningTime="2026-01-30 08:42:03.710001744 +0000 UTC m=+728.682313315" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.713687 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" podStartSLOduration=2.135347036 podStartE2EDuration="14.71365972s" podCreationTimestamp="2026-01-30 08:41:49 +0000 UTC" firstStartedPulling="2026-01-30 08:41:50.438894984 +0000 UTC m=+715.411206535" lastFinishedPulling="2026-01-30 08:42:03.017207668 +0000 UTC m=+727.989519219" observedRunningTime="2026-01-30 08:42:03.6928059 +0000 UTC m=+728.665117451" watchObservedRunningTime="2026-01-30 08:42:03.71365972 +0000 UTC m=+728.685971291" Jan 30 08:42:04 crc kubenswrapper[4758]: I0130 08:42:04.694949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"595c178e39d07f52d751d6e0f1cdab175348de3d322f2715dc38a2eb70b3bfce"} Jan 30 08:42:04 crc kubenswrapper[4758]: I0130 08:42:04.695676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"b17a964ab4800d994bb8b580b45aa2134a5a3fdcf5a0e6a87de944b836848c71"} Jan 30 08:42:04 crc kubenswrapper[4758]: I0130 08:42:04.695730 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"369e2afd4f78c8246fcc075eeafce29b8fe130ce6b8e56c2cfac0441f8beecc8"} Jan 30 08:42:05 crc kubenswrapper[4758]: I0130 08:42:05.712335 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"13f62c64a08e71dc87d3e1605e904c4b12751a8aeb95147da3075aed15b26457"} Jan 
30 08:42:05 crc kubenswrapper[4758]: I0130 08:42:05.713421 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"8bedf04c03bace50024c96304db46454c9a0aea841aab257eec91fc697f48f29"} Jan 30 08:42:05 crc kubenswrapper[4758]: I0130 08:42:05.713543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"920aea4add09045c60a3fcf096bd8b842e677c46130406634200f5d83451de18"} Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.735167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"19dc707a12956bede6344b0ae76c746d26cd4c8502210b96e037552735be6c49"} Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.736848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.737080 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.737098 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.772547 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" podStartSLOduration=8.772524038 podStartE2EDuration="8.772524038s" podCreationTimestamp="2026-01-30 08:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:42:08.769792471 +0000 UTC m=+733.742104032" 
watchObservedRunningTime="2026-01-30 08:42:08.772524038 +0000 UTC m=+733.744835589" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.781935 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.783619 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:10 crc kubenswrapper[4758]: I0130 08:42:10.186764 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:42:11 crc kubenswrapper[4758]: I0130 08:42:11.769786 4758 scope.go:117] "RemoveContainer" containerID="52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19" Jan 30 08:42:12 crc kubenswrapper[4758]: I0130 08:42:12.768168 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/2.log" Jan 30 08:42:12 crc kubenswrapper[4758]: I0130 08:42:12.768867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"8cef8c13864175ab940b1c8ed4f018e46504930a6b92827b9643af64ee512783"} Jan 30 08:42:22 crc kubenswrapper[4758]: I0130 08:42:22.387544 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:42:22 crc kubenswrapper[4758]: I0130 08:42:22.388956 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:42:31 crc kubenswrapper[4758]: I0130 08:42:31.209603 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:42 crc kubenswrapper[4758]: I0130 08:42:42.601024 4758 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.669111 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"] Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.671188 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.674581 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.680599 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"] Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.685988 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.686122 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.686152 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787986 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.788489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.817053 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.999647 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:48 crc kubenswrapper[4758]: I0130 08:42:48.226856 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"] Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.003351 4758 generic.go:334] "Generic (PLEG): container finished" podID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerID="8012f80827dd5c036964251a703d12dbaac9d88ff31a50056c753ab46fd88585" exitCode=0 Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.003417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"8012f80827dd5c036964251a703d12dbaac9d88ff31a50056c753ab46fd88585"} Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.003497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerStarted","Data":"b3e0f62cb4bc5d8b6a20dfc2bbf5a0d0e42ad874a77713041b53630c0c9d86e1"} Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.744664 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.746465 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.756208 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.823385 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.823662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.824474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.925808 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.926450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.926498 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.927136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.927781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.952187 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:50 crc kubenswrapper[4758]: I0130 08:42:50.071465 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:50 crc kubenswrapper[4758]: I0130 08:42:50.374958 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:42:50 crc kubenswrapper[4758]: W0130 08:42:50.385113 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a576a2_9393_4f2d_be9a_9838b65db788.slice/crio-547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25 WatchSource:0}: Error finding container 547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25: Status 404 returned error can't find the container with id 547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25 Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.019606 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" exitCode=0 Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.019694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c"} Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.020212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerStarted","Data":"547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25"} Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.031250 4758 generic.go:334] "Generic (PLEG): container finished" podID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerID="22a66147960313ccb4344ea98be24dcf3ea44f81d2d13024cfb477679df4dfbb" exitCode=0 Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.031302 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"22a66147960313ccb4344ea98be24dcf3ea44f81d2d13024cfb477679df4dfbb"} Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.042258 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerStarted","Data":"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669"} Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.047106 4758 generic.go:334] "Generic (PLEG): container finished" podID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerID="1356d91fae4bc7767083755b0d4679f06ccf69b868527fde791cde1618c25d0a" exitCode=0 Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.047277 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"1356d91fae4bc7767083755b0d4679f06ccf69b868527fde791cde1618c25d0a"} Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387052 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387110 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:42:52 crc 
kubenswrapper[4758]: I0130 08:42:52.387158 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387826 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387885 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89" gracePeriod=600 Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.057023 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" exitCode=0 Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.057116 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669"} Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060621 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89" exitCode=0 Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060715 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89"} Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060770 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"} Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060791 4758 scope.go:117] "RemoveContainer" containerID="12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.361389 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.378014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"e581bd2a-bbb4-476e-821e-f55ba597f41e\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.378130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"e581bd2a-bbb4-476e-821e-f55ba597f41e\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.378322 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"e581bd2a-bbb4-476e-821e-f55ba597f41e\" (UID: 
\"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.379659 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle" (OuterVolumeSpecName: "bundle") pod "e581bd2a-bbb4-476e-821e-f55ba597f41e" (UID: "e581bd2a-bbb4-476e-821e-f55ba597f41e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.388499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g" (OuterVolumeSpecName: "kube-api-access-d8g7g") pod "e581bd2a-bbb4-476e-821e-f55ba597f41e" (UID: "e581bd2a-bbb4-476e-821e-f55ba597f41e"). InnerVolumeSpecName "kube-api-access-d8g7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.410509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util" (OuterVolumeSpecName: "util") pod "e581bd2a-bbb4-476e-821e-f55ba597f41e" (UID: "e581bd2a-bbb4-476e-821e-f55ba597f41e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.480252 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.480340 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.480356 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:54 crc kubenswrapper[4758]: I0130 08:42:54.073456 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"b3e0f62cb4bc5d8b6a20dfc2bbf5a0d0e42ad874a77713041b53630c0c9d86e1"} Jan 30 08:42:54 crc kubenswrapper[4758]: I0130 08:42:54.074183 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e0f62cb4bc5d8b6a20dfc2bbf5a0d0e42ad874a77713041b53630c0c9d86e1" Jan 30 08:42:54 crc kubenswrapper[4758]: I0130 08:42:54.073497 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:55 crc kubenswrapper[4758]: I0130 08:42:55.091163 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerStarted","Data":"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e"} Jan 30 08:42:55 crc kubenswrapper[4758]: I0130 08:42:55.119511 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-772qh" podStartSLOduration=2.63789184 podStartE2EDuration="6.119479891s" podCreationTimestamp="2026-01-30 08:42:49 +0000 UTC" firstStartedPulling="2026-01-30 08:42:51.028194688 +0000 UTC m=+776.000506249" lastFinishedPulling="2026-01-30 08:42:54.509782749 +0000 UTC m=+779.482094300" observedRunningTime="2026-01-30 08:42:55.113400737 +0000 UTC m=+780.085712328" watchObservedRunningTime="2026-01-30 08:42:55.119479891 +0000 UTC m=+780.091791442" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.084553 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r9vhp"] Jan 30 08:42:58 crc kubenswrapper[4758]: E0130 08:42:58.085354 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="extract" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085372 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="extract" Jan 30 08:42:58 crc kubenswrapper[4758]: E0130 08:42:58.085396 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="util" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085403 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="util" Jan 
30 08:42:58 crc kubenswrapper[4758]: E0130 08:42:58.085411 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="pull" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085417 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="pull" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085538 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="extract" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.086264 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.089526 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.090314 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c9stz" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.090498 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.105484 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r9vhp"] Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.154841 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2nj\" (UniqueName: \"kubernetes.io/projected/1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1-kube-api-access-6q2nj\") pod \"nmstate-operator-646758c888-r9vhp\" (UID: \"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1\") " pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.255886 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2nj\" (UniqueName: \"kubernetes.io/projected/1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1-kube-api-access-6q2nj\") pod \"nmstate-operator-646758c888-r9vhp\" (UID: \"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1\") " pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.288547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2nj\" (UniqueName: \"kubernetes.io/projected/1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1-kube-api-access-6q2nj\") pod \"nmstate-operator-646758c888-r9vhp\" (UID: \"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1\") " pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.410567 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.734512 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r9vhp"] Jan 30 08:42:59 crc kubenswrapper[4758]: I0130 08:42:59.117887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" event={"ID":"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1","Type":"ContainerStarted","Data":"e7e537fc52ef0e35371b58f873d73081815cc17f84a2bfbd5a1288ac12f4fc67"} Jan 30 08:43:00 crc kubenswrapper[4758]: I0130 08:43:00.072478 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:00 crc kubenswrapper[4758]: I0130 08:43:00.072558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:01 crc kubenswrapper[4758]: I0130 08:43:01.125409 4758 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-772qh" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" probeResult="failure" output=< Jan 30 08:43:01 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:43:01 crc kubenswrapper[4758]: > Jan 30 08:43:02 crc kubenswrapper[4758]: I0130 08:43:02.141110 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" event={"ID":"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1","Type":"ContainerStarted","Data":"e4d2416678b0b8bb5dbe44644a74eb9a0014be11575be6a2c758d8f95d23a15f"} Jan 30 08:43:02 crc kubenswrapper[4758]: I0130 08:43:02.159213 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" podStartSLOduration=1.795149173 podStartE2EDuration="4.159182599s" podCreationTimestamp="2026-01-30 08:42:58 +0000 UTC" firstStartedPulling="2026-01-30 08:42:58.755324864 +0000 UTC m=+783.727636415" lastFinishedPulling="2026-01-30 08:43:01.11935829 +0000 UTC m=+786.091669841" observedRunningTime="2026-01-30 08:43:02.157133405 +0000 UTC m=+787.129444986" watchObservedRunningTime="2026-01-30 08:43:02.159182599 +0000 UTC m=+787.131494150" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.691146 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.693153 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.703519 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hlt5d" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.704104 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.704867 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.707501 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.717723 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.720599 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmfs\" (UniqueName: \"kubernetes.io/projected/767b8ab5-0385-4ee5-a65c-20d58550812e-kube-api-access-5vmfs\") pod \"nmstate-metrics-54757c584b-9d5tr\" (UID: \"767b8ab5-0385-4ee5-a65c-20d58550812e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.746227 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.763377 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jjswr"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.764894 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkcv\" (UniqueName: \"kubernetes.io/projected/619a7108-d329-4b73-84eb-4258a2bfe118-kube-api-access-mmkcv\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822765 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-dbus-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822974 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-ovs-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.823117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vmfs\" (UniqueName: \"kubernetes.io/projected/767b8ab5-0385-4ee5-a65c-20d58550812e-kube-api-access-5vmfs\") pod 
\"nmstate-metrics-54757c584b-9d5tr\" (UID: \"767b8ab5-0385-4ee5-a65c-20d58550812e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.823251 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-nmstate-lock\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.824469 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64fp\" (UniqueName: \"kubernetes.io/projected/b3b445ff-f326-4f7a-9f01-557ea0ac488e-kube-api-access-g64fp\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.847695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vmfs\" (UniqueName: \"kubernetes.io/projected/767b8ab5-0385-4ee5-a65c-20d58550812e-kube-api-access-5vmfs\") pod \"nmstate-metrics-54757c584b-9d5tr\" (UID: \"767b8ab5-0385-4ee5-a65c-20d58550812e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.913456 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.914574 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.918404 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.918414 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.923779 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nm8hd" Jan 30 08:43:07 crc kubenswrapper[4758]: E0130 08:43:07.927510 4758 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 08:43:07 crc kubenswrapper[4758]: E0130 08:43:07.927598 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair podName:b3b445ff-f326-4f7a-9f01-557ea0ac488e nodeName:}" failed. No retries permitted until 2026-01-30 08:43:08.427572465 +0000 UTC m=+793.399884016 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-tp5p2" (UID: "b3b445ff-f326-4f7a-9f01-557ea0ac488e") : secret "openshift-nmstate-webhook" not found Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.927550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928426 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkcv\" (UniqueName: \"kubernetes.io/projected/619a7108-d329-4b73-84eb-4258a2bfe118-kube-api-access-mmkcv\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-dbus-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-ovs-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928672 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-nmstate-lock\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928703 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64fp\" (UniqueName: \"kubernetes.io/projected/b3b445ff-f326-4f7a-9f01-557ea0ac488e-kube-api-access-g64fp\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928967 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-dbus-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.929015 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-ovs-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.929057 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-nmstate-lock\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.946078 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64fp\" (UniqueName: \"kubernetes.io/projected/b3b445ff-f326-4f7a-9f01-557ea0ac488e-kube-api-access-g64fp\") pod 
\"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.952281 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkcv\" (UniqueName: \"kubernetes.io/projected/619a7108-d329-4b73-84eb-4258a2bfe118-kube-api-access-mmkcv\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.961170 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.026809 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.084248 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135283 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edf1a46e-1ddb-45c6-b545-911d0f651ee9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135339 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7tn\" (UniqueName: \"kubernetes.io/projected/edf1a46e-1ddb-45c6-b545-911d0f651ee9-kube-api-access-qd7tn\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135381 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edf1a46e-1ddb-45c6-b545-911d0f651ee9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135463 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56f8498f99-h8fr5"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.136432 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.199949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jjswr" event={"ID":"619a7108-d329-4b73-84eb-4258a2bfe118","Type":"ContainerStarted","Data":"45895ae2e4834a412ff7fcb3335322f58e676cd3627f4a3eb53e671583ed9844"} Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.208774 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56f8498f99-h8fr5"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253072 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfjf\" (UniqueName: \"kubernetes.io/projected/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-kube-api-access-dcfjf\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-oauth-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253525 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edf1a46e-1ddb-45c6-b545-911d0f651ee9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253571 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd7tn\" (UniqueName: \"kubernetes.io/projected/edf1a46e-1ddb-45c6-b545-911d0f651ee9-kube-api-access-qd7tn\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253597 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edf1a46e-1ddb-45c6-b545-911d0f651ee9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253629 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-oauth-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253645 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-trusted-ca-bundle\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253673 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-service-ca\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.255264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edf1a46e-1ddb-45c6-b545-911d0f651ee9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.261746 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edf1a46e-1ddb-45c6-b545-911d0f651ee9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.291738 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7tn\" (UniqueName: \"kubernetes.io/projected/edf1a46e-1ddb-45c6-b545-911d0f651ee9-kube-api-access-qd7tn\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 
30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfjf\" (UniqueName: \"kubernetes.io/projected/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-kube-api-access-dcfjf\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-oauth-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355371 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-oauth-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-trusted-ca-bundle\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc 
kubenswrapper[4758]: I0130 08:43:08.355410 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-service-ca\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.356404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-service-ca\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.357358 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-oauth-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.358168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-trusted-ca-bundle\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.360462 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-oauth-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.361094 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.376069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.409359 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfjf\" (UniqueName: \"kubernetes.io/projected/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-kube-api-access-dcfjf\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.456793 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.461313 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.466795 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.528127 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.538235 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.632979 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.845510 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj"] Jan 30 08:43:08 crc kubenswrapper[4758]: W0130 08:43:08.853186 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf1a46e_1ddb_45c6_b545_911d0f651ee9.slice/crio-162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18 WatchSource:0}: Error finding container 162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18: Status 404 returned error can't find the container with id 162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18 Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.899972 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.992533 
4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56f8498f99-h8fr5"] Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.208004 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" event={"ID":"edf1a46e-1ddb-45c6-b545-911d0f651ee9","Type":"ContainerStarted","Data":"162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18"} Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.209872 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" event={"ID":"767b8ab5-0385-4ee5-a65c-20d58550812e","Type":"ContainerStarted","Data":"61218ee06e860d4810de00528cdf4b89f598247f92db6ed94b39fe24c030bdc9"} Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.211781 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" event={"ID":"b3b445ff-f326-4f7a-9f01-557ea0ac488e","Type":"ContainerStarted","Data":"0925d2b376eef54054cff6c78963b3250bb0044d238a21f8290c2a06ae917a04"} Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.212796 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56f8498f99-h8fr5" event={"ID":"abd6f4b8-59ba-4439-b155-9a283fbc4b3f","Type":"ContainerStarted","Data":"9fcab79ffc5670c61c3339bbf9ce6d69794843ecfc576fb91882524d57fdee36"} Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.129974 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.192254 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.226477 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56f8498f99-h8fr5" 
event={"ID":"abd6f4b8-59ba-4439-b155-9a283fbc4b3f","Type":"ContainerStarted","Data":"56ccd8385b23655c33ba34f74b898eb838ee72a21760b3705a3c53a8b0a07c72"} Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.258565 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56f8498f99-h8fr5" podStartSLOduration=2.258537318 podStartE2EDuration="2.258537318s" podCreationTimestamp="2026-01-30 08:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:43:10.244934236 +0000 UTC m=+795.217245797" watchObservedRunningTime="2026-01-30 08:43:10.258537318 +0000 UTC m=+795.230848879" Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.373699 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.233964 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-772qh" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" containerID="cri-o://33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" gracePeriod=2 Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.647245 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.820750 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"c5a576a2-9393-4f2d-be9a-9838b65db788\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.820800 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"c5a576a2-9393-4f2d-be9a-9838b65db788\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.820833 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"c5a576a2-9393-4f2d-be9a-9838b65db788\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.822335 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities" (OuterVolumeSpecName: "utilities") pod "c5a576a2-9393-4f2d-be9a-9838b65db788" (UID: "c5a576a2-9393-4f2d-be9a-9838b65db788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.827434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9" (OuterVolumeSpecName: "kube-api-access-ctqh9") pod "c5a576a2-9393-4f2d-be9a-9838b65db788" (UID: "c5a576a2-9393-4f2d-be9a-9838b65db788"). InnerVolumeSpecName "kube-api-access-ctqh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.921839 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.922297 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.953231 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a576a2-9393-4f2d-be9a-9838b65db788" (UID: "c5a576a2-9393-4f2d-be9a-9838b65db788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.023090 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.242161 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" event={"ID":"767b8ab5-0385-4ee5-a65c-20d58550812e","Type":"ContainerStarted","Data":"b86d0cbb8f23390f3b6c78572cf6ec14bd320a0326fb326faf4a2e5e823d4a47"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.245980 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" exitCode=0 Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.246170 4758 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.248226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.248278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.248307 4758 scope.go:117] "RemoveContainer" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.254648 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" event={"ID":"b3b445ff-f326-4f7a-9f01-557ea0ac488e","Type":"ContainerStarted","Data":"1ce5fb66e3661009471de20890f3ae7bbabd4e5c657b723bbe5f2cd02660eb20"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.254730 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.259749 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jjswr" event={"ID":"619a7108-d329-4b73-84eb-4258a2bfe118","Type":"ContainerStarted","Data":"f7ca91b2991049b0fe4eaf7c7c82407c77b9ba3ac87941ac9888be454775df81"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.259845 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.262674 4758 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" event={"ID":"edf1a46e-1ddb-45c6-b545-911d0f651ee9","Type":"ContainerStarted","Data":"3b83e15b267938fe6bee4924ac1d862999ed51b63cc0cc5ae918d82deaec66ca"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.291343 4758 scope.go:117] "RemoveContainer" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.293069 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" podStartSLOduration=2.738296622 podStartE2EDuration="5.293015362s" podCreationTimestamp="2026-01-30 08:43:07 +0000 UTC" firstStartedPulling="2026-01-30 08:43:08.910702922 +0000 UTC m=+793.883014473" lastFinishedPulling="2026-01-30 08:43:11.465421662 +0000 UTC m=+796.437733213" observedRunningTime="2026-01-30 08:43:12.281787235 +0000 UTC m=+797.254098786" watchObservedRunningTime="2026-01-30 08:43:12.293015362 +0000 UTC m=+797.265326913" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.308178 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.327850 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.345782 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jjswr" podStartSLOduration=2.026049551 podStartE2EDuration="5.345729499s" podCreationTimestamp="2026-01-30 08:43:07 +0000 UTC" firstStartedPulling="2026-01-30 08:43:08.147265022 +0000 UTC m=+793.119576573" lastFinishedPulling="2026-01-30 08:43:11.46694497 +0000 UTC m=+796.439256521" observedRunningTime="2026-01-30 08:43:12.342695213 +0000 UTC m=+797.315006764" watchObservedRunningTime="2026-01-30 08:43:12.345729499 
+0000 UTC m=+797.318041050" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.348737 4758 scope.go:117] "RemoveContainer" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.367201 4758 scope.go:117] "RemoveContainer" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" Jan 30 08:43:12 crc kubenswrapper[4758]: E0130 08:43:12.368583 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e\": container with ID starting with 33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e not found: ID does not exist" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.368625 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e"} err="failed to get container status \"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e\": rpc error: code = NotFound desc = could not find container \"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e\": container with ID starting with 33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e not found: ID does not exist" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.368655 4758 scope.go:117] "RemoveContainer" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" Jan 30 08:43:12 crc kubenswrapper[4758]: E0130 08:43:12.369071 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669\": container with ID starting with 4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669 not found: ID does 
not exist" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.369091 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669"} err="failed to get container status \"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669\": rpc error: code = NotFound desc = could not find container \"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669\": container with ID starting with 4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669 not found: ID does not exist" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.369107 4758 scope.go:117] "RemoveContainer" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" Jan 30 08:43:12 crc kubenswrapper[4758]: E0130 08:43:12.373139 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c\": container with ID starting with 21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c not found: ID does not exist" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.373197 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c"} err="failed to get container status \"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c\": rpc error: code = NotFound desc = could not find container \"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c\": container with ID starting with 21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c not found: ID does not exist" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.371879 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" podStartSLOduration=2.762533424 podStartE2EDuration="5.3718608s" podCreationTimestamp="2026-01-30 08:43:07 +0000 UTC" firstStartedPulling="2026-01-30 08:43:08.854858146 +0000 UTC m=+793.827169687" lastFinishedPulling="2026-01-30 08:43:11.464185512 +0000 UTC m=+796.436497063" observedRunningTime="2026-01-30 08:43:12.368201684 +0000 UTC m=+797.340513235" watchObservedRunningTime="2026-01-30 08:43:12.3718608 +0000 UTC m=+797.344172351" Jan 30 08:43:13 crc kubenswrapper[4758]: I0130 08:43:13.782040 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" path="/var/lib/kubelet/pods/c5a576a2-9393-4f2d-be9a-9838b65db788/volumes" Jan 30 08:43:14 crc kubenswrapper[4758]: I0130 08:43:14.290959 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" event={"ID":"767b8ab5-0385-4ee5-a65c-20d58550812e","Type":"ContainerStarted","Data":"ea191eb859d02d4c9faffa4631985a445c2a18b04fdcb175113e7ab494127fb8"} Jan 30 08:43:14 crc kubenswrapper[4758]: I0130 08:43:14.315045 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" podStartSLOduration=1.795156567 podStartE2EDuration="7.315007999s" podCreationTimestamp="2026-01-30 08:43:07 +0000 UTC" firstStartedPulling="2026-01-30 08:43:08.568065455 +0000 UTC m=+793.540376996" lastFinishedPulling="2026-01-30 08:43:14.087916877 +0000 UTC m=+799.060228428" observedRunningTime="2026-01-30 08:43:14.310733894 +0000 UTC m=+799.283045445" watchObservedRunningTime="2026-01-30 08:43:14.315007999 +0000 UTC m=+799.287319560" Jan 30 08:43:18 crc kubenswrapper[4758]: I0130 08:43:18.130938 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:18 crc 
kubenswrapper[4758]: I0130 08:43:18.464910 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:18 crc kubenswrapper[4758]: I0130 08:43:18.466451 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:18 crc kubenswrapper[4758]: I0130 08:43:18.472559 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:19 crc kubenswrapper[4758]: I0130 08:43:19.350152 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:19 crc kubenswrapper[4758]: I0130 08:43:19.429342 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:43:28 crc kubenswrapper[4758]: I0130 08:43:28.638790 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.664699 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r"] Jan 30 08:43:41 crc kubenswrapper[4758]: E0130 08:43:41.666312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-utilities" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.666394 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-utilities" Jan 30 08:43:41 crc kubenswrapper[4758]: E0130 08:43:41.666467 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-content" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.666543 4758 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-content" Jan 30 08:43:41 crc kubenswrapper[4758]: E0130 08:43:41.666619 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.666688 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.667136 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.668057 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.670359 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.681186 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r"] Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.705234 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.705311 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.705345 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806029 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806157 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806183 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806626 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.825980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.987404 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.230847 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r"] Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.511206 4758 generic.go:334] "Generic (PLEG): container finished" podID="749e159a-10a8-4704-a263-3ec389807647" containerID="a1330550da406b3a314a2c4212ad1f066a03110886e5d8024b4f4a295f618305" exitCode=0 Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.511253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"a1330550da406b3a314a2c4212ad1f066a03110886e5d8024b4f4a295f618305"} Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.511320 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerStarted","Data":"1709d0d8c255b42989366e9b1fc41959e4146ebf0442e6e462d4a5a0dd35530b"} Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.487506 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rxgh6" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" containerID="cri-o://360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" gracePeriod=15 Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.521560 4758 generic.go:334] "Generic (PLEG): container finished" podID="749e159a-10a8-4704-a263-3ec389807647" containerID="1b82ce90d5ccfceb91d2dcb88837d441c4158ede174198a27ed1d455438532ca" exitCode=0 Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.521830 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"1b82ce90d5ccfceb91d2dcb88837d441c4158ede174198a27ed1d455438532ca"} Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.896330 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rxgh6_df190322-1e43-4ae4-ac74-78702c913801/console/0.log" Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.896394 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.060545 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.060816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.060936 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061062 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061188 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061281 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061369 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061544 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061661 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061554 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config" (OuterVolumeSpecName: "console-config") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061847 4758 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061939 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.062010 4758 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.062031 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca" 
(OuterVolumeSpecName: "service-ca") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.066838 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t" (OuterVolumeSpecName: "kube-api-access-8tl5t") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "kube-api-access-8tl5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.067320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.068971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162626 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162655 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162667 4758 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162675 4758 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.528907 4758 generic.go:334] "Generic (PLEG): container finished" podID="749e159a-10a8-4704-a263-3ec389807647" containerID="539d0a2948c93c60e222e24187e4ed5c60f13aa4a1fa6df7c697a4475bfc8e01" exitCode=0 Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.528991 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"539d0a2948c93c60e222e24187e4ed5c60f13aa4a1fa6df7c697a4475bfc8e01"} Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532201 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rxgh6_df190322-1e43-4ae4-ac74-78702c913801/console/0.log" Jan 30 
08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532244 4758 generic.go:334] "Generic (PLEG): container finished" podID="df190322-1e43-4ae4-ac74-78702c913801" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" exitCode=2 Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532272 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerDied","Data":"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3"} Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerDied","Data":"773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4"} Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532318 4758 scope.go:117] "RemoveContainer" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532427 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.555308 4758 scope.go:117] "RemoveContainer" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" Jan 30 08:43:45 crc kubenswrapper[4758]: E0130 08:43:45.555603 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3\": container with ID starting with 360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3 not found: ID does not exist" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.555626 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3"} err="failed to get container status \"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3\": rpc error: code = NotFound desc = could not find container \"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3\": container with ID starting with 360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3 not found: ID does not exist" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.564683 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.568069 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:43:45 crc kubenswrapper[4758]: E0130 08:43:45.599572 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf190322_1e43_4ae4_ac74_78702c913801.slice/crio-773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4\": 
RecentStats: unable to find data in memory cache]" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.776255 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df190322-1e43-4ae4-ac74-78702c913801" path="/var/lib/kubelet/pods/df190322-1e43-4ae4-ac74-78702c913801/volumes" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.751705 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.885312 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"749e159a-10a8-4704-a263-3ec389807647\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.885398 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"749e159a-10a8-4704-a263-3ec389807647\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.885517 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"749e159a-10a8-4704-a263-3ec389807647\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.886868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle" (OuterVolumeSpecName: "bundle") pod "749e159a-10a8-4704-a263-3ec389807647" (UID: "749e159a-10a8-4704-a263-3ec389807647"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.892097 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx" (OuterVolumeSpecName: "kube-api-access-m55sx") pod "749e159a-10a8-4704-a263-3ec389807647" (UID: "749e159a-10a8-4704-a263-3ec389807647"). InnerVolumeSpecName "kube-api-access-m55sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.898687 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util" (OuterVolumeSpecName: "util") pod "749e159a-10a8-4704-a263-3ec389807647" (UID: "749e159a-10a8-4704-a263-3ec389807647"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.987000 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.987033 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.987059 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:47 crc kubenswrapper[4758]: I0130 08:43:47.544973 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" 
event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"1709d0d8c255b42989366e9b1fc41959e4146ebf0442e6e462d4a5a0dd35530b"} Jan 30 08:43:47 crc kubenswrapper[4758]: I0130 08:43:47.545025 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1709d0d8c255b42989366e9b1fc41959e4146ebf0442e6e462d4a5a0dd35530b" Jan 30 08:43:47 crc kubenswrapper[4758]: I0130 08:43:47.545071 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.088324 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4"] Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089133 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="extract" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089150 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="extract" Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089160 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089167 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089180 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="util" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089188 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="util" Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089202 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="pull" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089209 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="pull" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089337 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="extract" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089351 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089917 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.100779 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.101906 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.104606 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.104865 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.104937 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qldd4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.140482 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4"] Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.191232 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-webhook-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.191292 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-apiservice-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.191379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4mv\" (UniqueName: \"kubernetes.io/projected/29744c9b-d424-4ac9-b224-fe0956166373-kube-api-access-7q4mv\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.293141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q4mv\" (UniqueName: \"kubernetes.io/projected/29744c9b-d424-4ac9-b224-fe0956166373-kube-api-access-7q4mv\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.293237 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-webhook-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.293266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-apiservice-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.300720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-apiservice-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.312299 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-webhook-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.338904 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q4mv\" (UniqueName: \"kubernetes.io/projected/29744c9b-d424-4ac9-b224-fe0956166373-kube-api-access-7q4mv\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: 
\"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.404831 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.571531 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr"] Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.572492 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.574245 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.574435 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7cwlc" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.574887 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.626800 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr"] Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.698891 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcvdn\" (UniqueName: \"kubernetes.io/projected/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-kube-api-access-gcvdn\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.698966 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-apiservice-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.698991 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-webhook-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.800421 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcvdn\" (UniqueName: \"kubernetes.io/projected/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-kube-api-access-gcvdn\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.800504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-apiservice-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.800529 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-webhook-cert\") pod 
\"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.845641 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.859413 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-webhook-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.859901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-apiservice-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.863442 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcvdn\" (UniqueName: \"kubernetes.io/projected/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-kube-api-access-gcvdn\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.893679 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7cwlc" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.905140 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.975374 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4"] Jan 30 08:43:55 crc kubenswrapper[4758]: W0130 08:43:55.985762 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29744c9b_d424_4ac9_b224_fe0956166373.slice/crio-cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889 WatchSource:0}: Error finding container cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889: Status 404 returned error can't find the container with id cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889 Jan 30 08:43:56 crc kubenswrapper[4758]: I0130 08:43:56.185843 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr"] Jan 30 08:43:56 crc kubenswrapper[4758]: I0130 08:43:56.588296 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" event={"ID":"29744c9b-d424-4ac9-b224-fe0956166373","Type":"ContainerStarted","Data":"cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889"} Jan 30 08:43:56 crc kubenswrapper[4758]: I0130 08:43:56.589737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" event={"ID":"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5","Type":"ContainerStarted","Data":"476884484fef409049f88b6d1273b20e66d6f353661cead64c7718bfa8998b7d"} Jan 30 08:43:59 crc kubenswrapper[4758]: I0130 08:43:59.608520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" 
event={"ID":"29744c9b-d424-4ac9-b224-fe0956166373","Type":"ContainerStarted","Data":"624c9783e5df601e4eb9fa9d9ff41b2749f66eefb95954fa6337ab34d8746560"} Jan 30 08:43:59 crc kubenswrapper[4758]: I0130 08:43:59.608894 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:59 crc kubenswrapper[4758]: I0130 08:43:59.643524 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" podStartSLOduration=1.26199278 podStartE2EDuration="4.643506098s" podCreationTimestamp="2026-01-30 08:43:55 +0000 UTC" firstStartedPulling="2026-01-30 08:43:55.98890687 +0000 UTC m=+840.961218421" lastFinishedPulling="2026-01-30 08:43:59.370420178 +0000 UTC m=+844.342731739" observedRunningTime="2026-01-30 08:43:59.642330091 +0000 UTC m=+844.614641642" watchObservedRunningTime="2026-01-30 08:43:59.643506098 +0000 UTC m=+844.615817659" Jan 30 08:44:02 crc kubenswrapper[4758]: I0130 08:44:02.633590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" event={"ID":"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5","Type":"ContainerStarted","Data":"d170ad99e54219da543d100f434197ce5a9698b95de4fd59c050f5d7c53d9f88"} Jan 30 08:44:02 crc kubenswrapper[4758]: I0130 08:44:02.634235 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:44:02 crc kubenswrapper[4758]: I0130 08:44:02.706826 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" podStartSLOduration=1.821817979 podStartE2EDuration="7.706805339s" podCreationTimestamp="2026-01-30 08:43:55 +0000 UTC" firstStartedPulling="2026-01-30 08:43:56.199798597 +0000 UTC m=+841.172110148" lastFinishedPulling="2026-01-30 
08:44:02.084785957 +0000 UTC m=+847.057097508" observedRunningTime="2026-01-30 08:44:02.702829904 +0000 UTC m=+847.675141485" watchObservedRunningTime="2026-01-30 08:44:02.706805339 +0000 UTC m=+847.679116910" Jan 30 08:44:15 crc kubenswrapper[4758]: I0130 08:44:15.906760 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:44:35 crc kubenswrapper[4758]: I0130 08:44:35.407404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.215618 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vfjq6"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.218552 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.221829 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dblws" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.222096 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.222216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.230097 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.236508 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.236572 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.246262 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323235 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-sockets\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323275 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-conf\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323322 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52pl\" (UniqueName: \"kubernetes.io/projected/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-kube-api-access-z52pl\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323348 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-reloader\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-startup\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323437 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.332825 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-67xgc"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.333628 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358355 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358586 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v5sql" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358702 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358837 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.415631 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-28gk8"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.416801 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.424626 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.424764 4758 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.424823 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs podName:c8f7b5d4-29b4-4741-9bcf-a993dbbce575 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:44:36.924803094 +0000 UTC m=+881.897114645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs") pod "frr-k8s-vfjq6" (UID: "c8f7b5d4-29b4-4741-9bcf-a993dbbce575") : secret "frr-k8s-certs-secret" not found Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5gfl\" (UniqueName: \"kubernetes.io/projected/0393e366-eeba-40c9-8020-9b16d0092dfd-kube-api-access-s5gfl\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-metrics-certs\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10c38902-7117-4dc3-ad90-eb26dd9656de-metallb-excludel2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425191 
4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jx2\" (UniqueName: \"kubernetes.io/projected/10c38902-7117-4dc3-ad90-eb26dd9656de-kube-api-access-46jx2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425207 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-sockets\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-conf\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425241 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425271 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z52pl\" (UniqueName: \"kubernetes.io/projected/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-kube-api-access-z52pl\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-reloader\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0393e366-eeba-40c9-8020-9b16d0092dfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425322 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-startup\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426274 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426365 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-conf\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426457 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-reloader\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 
crc kubenswrapper[4758]: I0130 08:44:36.426625 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-startup\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426651 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-sockets\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.432711 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.470818 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-28gk8"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.481816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z52pl\" (UniqueName: \"kubernetes.io/projected/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-kube-api-access-z52pl\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5gfl\" (UniqueName: \"kubernetes.io/projected/0393e366-eeba-40c9-8020-9b16d0092dfd-kube-api-access-s5gfl\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-metrics-certs\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526428 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-metrics-certs\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10c38902-7117-4dc3-ad90-eb26dd9656de-metallb-excludel2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46jx2\" (UniqueName: \"kubernetes.io/projected/10c38902-7117-4dc3-ad90-eb26dd9656de-kube-api-access-46jx2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxkz\" (UniqueName: \"kubernetes.io/projected/30993dee-7712-48e7-a156-86293a84ea40-kube-api-access-vfxkz\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526534 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-cert\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526603 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0393e366-eeba-40c9-8020-9b16d0092dfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.529734 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10c38902-7117-4dc3-ad90-eb26dd9656de-metallb-excludel2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.531058 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0393e366-eeba-40c9-8020-9b16d0092dfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.531458 4758 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.531499 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist podName:10c38902-7117-4dc3-ad90-eb26dd9656de nodeName:}" failed. No retries permitted until 2026-01-30 08:44:37.031487136 +0000 UTC m=+882.003798677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist") pod "speaker-67xgc" (UID: "10c38902-7117-4dc3-ad90-eb26dd9656de") : secret "metallb-memberlist" not found Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.555491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-metrics-certs\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.566658 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5gfl\" (UniqueName: \"kubernetes.io/projected/0393e366-eeba-40c9-8020-9b16d0092dfd-kube-api-access-s5gfl\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.570329 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.581088 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46jx2\" (UniqueName: \"kubernetes.io/projected/10c38902-7117-4dc3-ad90-eb26dd9656de-kube-api-access-46jx2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.628006 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfxkz\" (UniqueName: \"kubernetes.io/projected/30993dee-7712-48e7-a156-86293a84ea40-kube-api-access-vfxkz\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.628095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-cert\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.628144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-metrics-certs\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.631870 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-metrics-certs\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" 
Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.636315 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.644827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-cert\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.651754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfxkz\" (UniqueName: \"kubernetes.io/projected/30993dee-7712-48e7-a156-86293a84ea40-kube-api-access-vfxkz\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.734270 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.931823 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.940017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.033992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:37 crc kubenswrapper[4758]: E0130 08:44:37.034264 4758 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 08:44:37 crc kubenswrapper[4758]: E0130 08:44:37.034382 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist podName:10c38902-7117-4dc3-ad90-eb26dd9656de nodeName:}" failed. No retries permitted until 2026-01-30 08:44:38.034329963 +0000 UTC m=+883.006641514 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist") pod "speaker-67xgc" (UID: "10c38902-7117-4dc3-ad90-eb26dd9656de") : secret "metallb-memberlist" not found Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.058059 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn"] Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.105153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-28gk8"] Jan 30 08:44:37 crc kubenswrapper[4758]: W0130 08:44:37.110193 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30993dee_7712_48e7_a156_86293a84ea40.slice/crio-83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5 WatchSource:0}: Error finding container 83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5: Status 404 returned error can't find the container with id 83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5 Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.154491 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.807084 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" event={"ID":"0393e366-eeba-40c9-8020-9b16d0092dfd","Type":"ContainerStarted","Data":"da3ca2cdc27d5ffd520fd635e97f4aaa57b72211a323ee724ba6b0d2c5e4af80"} Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.808533 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-28gk8" event={"ID":"30993dee-7712-48e7-a156-86293a84ea40","Type":"ContainerStarted","Data":"ebe9472869a92cb053722bf97bf776314958ef06fd1cf63bdb8f3608045a8b97"} Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.808564 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-28gk8" event={"ID":"30993dee-7712-48e7-a156-86293a84ea40","Type":"ContainerStarted","Data":"3e54a6c107cf56e6bc4150199ff9bc7049cd6443e346b0ba698ec17dedc5d67d"} Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.808582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-28gk8" event={"ID":"30993dee-7712-48e7-a156-86293a84ea40","Type":"ContainerStarted","Data":"83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5"} Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.809561 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.810495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"717c004b2b3954b8edb26aabb4d187dc7b2a366b896978bcd1ef98115c02652d"} Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.049035 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.054539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.153548 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-67xgc" Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.838227 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67xgc" event={"ID":"10c38902-7117-4dc3-ad90-eb26dd9656de","Type":"ContainerStarted","Data":"3d19062d3ced847174b716ec36288a7d03906fca45d21cd130c81c8ebf4692f4"} Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.838268 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67xgc" event={"ID":"10c38902-7117-4dc3-ad90-eb26dd9656de","Type":"ContainerStarted","Data":"be40c7a84bc977470834422b11a82801eb3ebebb4c5e007b6a973c2dc940ad28"} Jan 30 08:44:39 crc kubenswrapper[4758]: I0130 08:44:39.849569 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67xgc" event={"ID":"10c38902-7117-4dc3-ad90-eb26dd9656de","Type":"ContainerStarted","Data":"01d615c550515ba0a1a32634ef47dce8e001df825f334aeebca691722684a74a"} Jan 30 08:44:39 crc kubenswrapper[4758]: I0130 08:44:39.866217 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-28gk8" podStartSLOduration=3.866197403 podStartE2EDuration="3.866197403s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:44:37.830167987 +0000 UTC m=+882.802479558" watchObservedRunningTime="2026-01-30 08:44:39.866197403 +0000 UTC m=+884.838508954" Jan 30 08:44:39 crc kubenswrapper[4758]: I0130 08:44:39.868208 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-67xgc" podStartSLOduration=3.868201166 podStartE2EDuration="3.868201166s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:44:39.866540583 +0000 UTC m=+884.838852124" watchObservedRunningTime="2026-01-30 08:44:39.868201166 +0000 UTC m=+884.840512717" Jan 30 08:44:41 crc kubenswrapper[4758]: I0130 08:44:41.002017 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-67xgc" Jan 30 08:44:48 crc kubenswrapper[4758]: I0130 08:44:48.157601 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-67xgc" Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.214082 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b5d4-29b4-4741-9bcf-a993dbbce575" containerID="773297938fde593aa5746bf664e90261f77554f915b037fa617cd59e99188af1" exitCode=0 Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.214371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerDied","Data":"773297938fde593aa5746bf664e90261f77554f915b037fa617cd59e99188af1"} Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.219724 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" 
event={"ID":"0393e366-eeba-40c9-8020-9b16d0092dfd","Type":"ContainerStarted","Data":"544c34bafdf4ddd7f6c1105ce53a5a195635e27980839bf34c0b7eacc43dcc86"} Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.220403 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.299292 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" podStartSLOduration=1.442547219 podStartE2EDuration="14.299266621s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="2026-01-30 08:44:37.064719148 +0000 UTC m=+882.037030699" lastFinishedPulling="2026-01-30 08:44:49.92143855 +0000 UTC m=+894.893750101" observedRunningTime="2026-01-30 08:44:50.274549364 +0000 UTC m=+895.246860915" watchObservedRunningTime="2026-01-30 08:44:50.299266621 +0000 UTC m=+895.271578192" Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.227787 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b5d4-29b4-4741-9bcf-a993dbbce575" containerID="45f332d8f7698818dc6e493762afd7053beb5f35fc8b21b73d8bd865f62a8dc0" exitCode=0 Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.229380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerDied","Data":"45f332d8f7698818dc6e493762afd7053beb5f35fc8b21b73d8bd865f62a8dc0"} Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.827434 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.828993 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.831226 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.831618 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rxr69" Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.831623 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.851145 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.936734 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"openstack-operator-index-mdv5w\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.038383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"openstack-operator-index-mdv5w\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.064268 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"openstack-operator-index-mdv5w\" (UID: 
\"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.152303 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.240906 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b5d4-29b4-4741-9bcf-a993dbbce575" containerID="8cb1da3d8bf62ef21118c10b0bc9dc75a479757108dfeab5310ac8c2e5a8f3ae" exitCode=0 Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.241206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerDied","Data":"8cb1da3d8bf62ef21118c10b0bc9dc75a479757108dfeab5310ac8c2e5a8f3ae"} Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.387747 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.387816 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.423015 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:52 crc kubenswrapper[4758]: W0130 08:44:52.430958 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94969b49_36b1_4fcf_9fb3_54b56db5b8f6.slice/crio-d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f WatchSource:0}: Error finding container d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f: Status 404 returned error can't find the container with id d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f Jan 30 08:44:53 crc kubenswrapper[4758]: I0130 08:44:53.248050 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"2c1c74700c9bcc888bcc960984c74204a45d8221b2b14efe16cbbf2d10f5efe5"} Jan 30 08:44:53 crc kubenswrapper[4758]: I0130 08:44:53.249175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"34b6d60c439f116164b388d5ceb4ad0586358ece65d107884c91456757381b30"} Jan 30 08:44:53 crc kubenswrapper[4758]: I0130 08:44:53.249284 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerStarted","Data":"d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f"} Jan 30 08:44:54 crc kubenswrapper[4758]: I0130 08:44:54.279156 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"c62a320e0370cdb3f732fb2eeb91ed4cd48443ed51f58c5a1953293fb3b638ea"} Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.001155 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.608179 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-zh8ld"] Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.609021 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.619518 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zh8ld"] Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.693822 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9qsn\" (UniqueName: \"kubernetes.io/projected/43724cfc-11f7-4ded-9561-4bde1020015f-kube-api-access-d9qsn\") pod \"openstack-operator-index-zh8ld\" (UID: \"43724cfc-11f7-4ded-9561-4bde1020015f\") " pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.794898 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9qsn\" (UniqueName: \"kubernetes.io/projected/43724cfc-11f7-4ded-9561-4bde1020015f-kube-api-access-d9qsn\") pod \"openstack-operator-index-zh8ld\" (UID: \"43724cfc-11f7-4ded-9561-4bde1020015f\") " pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.834953 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9qsn\" (UniqueName: \"kubernetes.io/projected/43724cfc-11f7-4ded-9561-4bde1020015f-kube-api-access-d9qsn\") pod \"openstack-operator-index-zh8ld\" (UID: \"43724cfc-11f7-4ded-9561-4bde1020015f\") " pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.932504 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.298348 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"7af26f108b37c5184a8ef155aa0a255cc45c06d954560d6de60e851cd91e8ea0"} Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.298414 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"37f324624ea32820799303f521f70cfb5c3fe381b0fc5dc439a42fa4b7a67985"} Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.299431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerStarted","Data":"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"} Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.299614 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mdv5w" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server" containerID="cri-o://863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" gracePeriod=2 Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.323274 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mdv5w" podStartSLOduration=2.040726619 podStartE2EDuration="5.323219416s" podCreationTimestamp="2026-01-30 08:44:51 +0000 UTC" firstStartedPulling="2026-01-30 08:44:52.434225365 +0000 UTC m=+897.406536916" lastFinishedPulling="2026-01-30 08:44:55.716718162 +0000 UTC m=+900.689029713" observedRunningTime="2026-01-30 08:44:56.321552144 +0000 UTC m=+901.293863705" watchObservedRunningTime="2026-01-30 08:44:56.323219416 
+0000 UTC m=+901.295530967" Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.634553 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zh8ld"] Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.745506 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.930349 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.129411 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.136518 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx" (OuterVolumeSpecName: "kube-api-access-7tqmx") pod "94969b49-36b1-4fcf-9fb3-54b56db5b8f6" (UID: "94969b49-36b1-4fcf-9fb3-54b56db5b8f6"). InnerVolumeSpecName "kube-api-access-7tqmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.230947 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.306759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zh8ld" event={"ID":"43724cfc-11f7-4ded-9561-4bde1020015f","Type":"ContainerStarted","Data":"8cd09ee0b6e657488e2aba6443c03994d9742b975270de0bf53057140007c0f5"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.306801 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zh8ld" event={"ID":"43724cfc-11f7-4ded-9561-4bde1020015f","Type":"ContainerStarted","Data":"3a8cfbe98dba995e80cdee4a056c92ea86d612522152e62c77b307caa6775c7a"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.312102 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"6c9b07fded761bfe8c522ce99eb9868da841a9fe8888e977a9db8092456d2992"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.312595 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314143 4758 generic.go:334] "Generic (PLEG): container finished" podID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" exitCode=0 Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314173 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" 
event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerDied","Data":"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314190 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerDied","Data":"d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314206 4758 scope.go:117] "RemoveContainer" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314208 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.330849 4758 scope.go:117] "RemoveContainer" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" Jan 30 08:44:57 crc kubenswrapper[4758]: E0130 08:44:57.331568 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622\": container with ID starting with 863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622 not found: ID does not exist" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.331679 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"} err="failed to get container status \"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622\": rpc error: code = NotFound desc = could not find container \"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622\": container with ID starting with 
863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622 not found: ID does not exist" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.333697 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zh8ld" podStartSLOduration=2.250028483 podStartE2EDuration="2.333678631s" podCreationTimestamp="2026-01-30 08:44:55 +0000 UTC" firstStartedPulling="2026-01-30 08:44:56.656193857 +0000 UTC m=+901.628505408" lastFinishedPulling="2026-01-30 08:44:56.739844005 +0000 UTC m=+901.712155556" observedRunningTime="2026-01-30 08:44:57.324620757 +0000 UTC m=+902.296932328" watchObservedRunningTime="2026-01-30 08:44:57.333678631 +0000 UTC m=+902.305990182" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.359890 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vfjq6" podStartSLOduration=8.709285519 podStartE2EDuration="21.359866835s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="2026-01-30 08:44:37.251165866 +0000 UTC m=+882.223477427" lastFinishedPulling="2026-01-30 08:44:49.901747192 +0000 UTC m=+894.874058743" observedRunningTime="2026-01-30 08:44:57.356553 +0000 UTC m=+902.328864571" watchObservedRunningTime="2026-01-30 08:44:57.359866835 +0000 UTC m=+902.332178386" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.371589 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.376499 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.776420 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" path="/var/lib/kubelet/pods/94969b49-36b1-4fcf-9fb3-54b56db5b8f6/volumes" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 
08:45:00.147696 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 08:45:00 crc kubenswrapper[4758]: E0130 08:45:00.148196 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.148208 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.148344 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.148720 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.150499 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.150922 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.166082 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.268200 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 
08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.268280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.268316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.369500 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.369564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.369606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod 
\"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.371183 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.375309 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.388593 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.465517 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.686141 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 08:45:01 crc kubenswrapper[4758]: I0130 08:45:01.341023 4758 generic.go:334] "Generic (PLEG): container finished" podID="76da59ec-7916-4d09-8154-61e9848aaec6" containerID="60e0a5bbfedb1bfda46d2d92720410ff62ddecd2757f775d7598cb6c8cd22199" exitCode=0 Jan 30 08:45:01 crc kubenswrapper[4758]: I0130 08:45:01.341154 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" event={"ID":"76da59ec-7916-4d09-8154-61e9848aaec6","Type":"ContainerDied","Data":"60e0a5bbfedb1bfda46d2d92720410ff62ddecd2757f775d7598cb6c8cd22199"} Jan 30 08:45:01 crc kubenswrapper[4758]: I0130 08:45:01.341412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" event={"ID":"76da59ec-7916-4d09-8154-61e9848aaec6","Type":"ContainerStarted","Data":"39703a042eb17c46b43218f5906dda7812e648cdf0181a6b4dcd2bdae2a32c58"} Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.155747 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.158832 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.205716 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.605675 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.803196 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"76da59ec-7916-4d09-8154-61e9848aaec6\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.803306 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"76da59ec-7916-4d09-8154-61e9848aaec6\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.803382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"76da59ec-7916-4d09-8154-61e9848aaec6\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.804225 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume" (OuterVolumeSpecName: "config-volume") pod "76da59ec-7916-4d09-8154-61e9848aaec6" (UID: "76da59ec-7916-4d09-8154-61e9848aaec6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.810157 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76da59ec-7916-4d09-8154-61e9848aaec6" (UID: "76da59ec-7916-4d09-8154-61e9848aaec6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.812438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972" (OuterVolumeSpecName: "kube-api-access-cf972") pod "76da59ec-7916-4d09-8154-61e9848aaec6" (UID: "76da59ec-7916-4d09-8154-61e9848aaec6"). InnerVolumeSpecName "kube-api-access-cf972". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.905114 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.905163 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.905177 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:03 crc kubenswrapper[4758]: I0130 08:45:03.353920 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" event={"ID":"76da59ec-7916-4d09-8154-61e9848aaec6","Type":"ContainerDied","Data":"39703a042eb17c46b43218f5906dda7812e648cdf0181a6b4dcd2bdae2a32c58"} Jan 30 08:45:03 crc kubenswrapper[4758]: I0130 08:45:03.353959 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39703a042eb17c46b43218f5906dda7812e648cdf0181a6b4dcd2bdae2a32c58" Jan 30 08:45:03 crc kubenswrapper[4758]: I0130 08:45:03.353954 4758 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:05 crc kubenswrapper[4758]: I0130 08:45:05.933699 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:05 crc kubenswrapper[4758]: I0130 08:45:05.934293 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:05 crc kubenswrapper[4758]: I0130 08:45:05.957709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:06 crc kubenswrapper[4758]: I0130 08:45:06.392697 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:06 crc kubenswrapper[4758]: I0130 08:45:06.574818 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.650028 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56"] Jan 30 08:45:07 crc kubenswrapper[4758]: E0130 08:45:07.650871 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" containerName="collect-profiles" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.650885 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" containerName="collect-profiles" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.651141 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" containerName="collect-profiles" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.652707 4758 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.662930 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-94mbk" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.673759 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56"] Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.674146 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.674221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.674286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 
08:45:07.775777 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.775860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.776314 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.776458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.776854 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.802162 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.990818 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:08 crc kubenswrapper[4758]: I0130 08:45:08.377875 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56"] Jan 30 08:45:09 crc kubenswrapper[4758]: I0130 08:45:09.386666 4758 generic.go:334] "Generic (PLEG): container finished" podID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerID="6b78a0c138800ecd404ecb28d5f29d2e973073cee76f846bb827d92ea4b8d4cb" exitCode=0 Jan 30 08:45:09 crc kubenswrapper[4758]: I0130 08:45:09.386703 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"6b78a0c138800ecd404ecb28d5f29d2e973073cee76f846bb827d92ea4b8d4cb"} Jan 30 08:45:09 crc kubenswrapper[4758]: I0130 08:45:09.388595 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerStarted","Data":"3a2c8d67fbd967c8747b465b2a1ae35a5a44b6a7d8b4c83de875fef0f6929358"} Jan 30 08:45:10 crc kubenswrapper[4758]: I0130 08:45:10.395580 4758 generic.go:334] "Generic (PLEG): container finished" podID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerID="7310ecfc56b2a45dac3366f9f23b18b9a374aef78b95cf199011eaedbed8e93f" exitCode=0 Jan 30 08:45:10 crc kubenswrapper[4758]: I0130 08:45:10.395788 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"7310ecfc56b2a45dac3366f9f23b18b9a374aef78b95cf199011eaedbed8e93f"} Jan 30 08:45:11 crc kubenswrapper[4758]: I0130 08:45:11.405500 4758 generic.go:334] "Generic (PLEG): container finished" podID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerID="5b20749419c0ea927a0528dc7b0613d70bdb32031042086668807bc5313bb778" exitCode=0 Jan 30 08:45:11 crc kubenswrapper[4758]: I0130 08:45:11.405543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"5b20749419c0ea927a0528dc7b0613d70bdb32031042086668807bc5313bb778"} Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.651859 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.840664 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"51f6834f-53ed-44f6-ba73-fc7275fcb395\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.840717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"51f6834f-53ed-44f6-ba73-fc7275fcb395\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.840794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"51f6834f-53ed-44f6-ba73-fc7275fcb395\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.841771 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle" (OuterVolumeSpecName: "bundle") pod "51f6834f-53ed-44f6-ba73-fc7275fcb395" (UID: "51f6834f-53ed-44f6-ba73-fc7275fcb395"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.855326 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx" (OuterVolumeSpecName: "kube-api-access-gwvkx") pod "51f6834f-53ed-44f6-ba73-fc7275fcb395" (UID: "51f6834f-53ed-44f6-ba73-fc7275fcb395"). InnerVolumeSpecName "kube-api-access-gwvkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.857897 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util" (OuterVolumeSpecName: "util") pod "51f6834f-53ed-44f6-ba73-fc7275fcb395" (UID: "51f6834f-53ed-44f6-ba73-fc7275fcb395"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.942349 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.942384 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.942399 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:13 crc kubenswrapper[4758]: I0130 08:45:13.421582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"3a2c8d67fbd967c8747b465b2a1ae35a5a44b6a7d8b4c83de875fef0f6929358"} Jan 30 08:45:13 crc kubenswrapper[4758]: I0130 08:45:13.421621 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:13 crc kubenswrapper[4758]: I0130 08:45:13.421635 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a2c8d67fbd967c8747b465b2a1ae35a5a44b6a7d8b4c83de875fef0f6929358" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.232124 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj"] Jan 30 08:45:15 crc kubenswrapper[4758]: E0130 08:45:15.233593 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="util" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.233668 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="util" Jan 30 08:45:15 crc kubenswrapper[4758]: E0130 08:45:15.233729 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="pull" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.233786 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="pull" Jan 30 08:45:15 crc kubenswrapper[4758]: E0130 08:45:15.233859 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="extract" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.233914 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="extract" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.234097 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="extract" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.234608 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.236930 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cn2kb" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.271957 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj"] Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.371266 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b46sd\" (UniqueName: \"kubernetes.io/projected/10f9b3c9-c691-403e-801f-420bc2701a95-kube-api-access-b46sd\") pod \"openstack-operator-controller-init-744b85dfd5-tlqzj\" (UID: \"10f9b3c9-c691-403e-801f-420bc2701a95\") " pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.472155 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b46sd\" (UniqueName: \"kubernetes.io/projected/10f9b3c9-c691-403e-801f-420bc2701a95-kube-api-access-b46sd\") pod \"openstack-operator-controller-init-744b85dfd5-tlqzj\" (UID: \"10f9b3c9-c691-403e-801f-420bc2701a95\") " pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.492060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b46sd\" (UniqueName: \"kubernetes.io/projected/10f9b3c9-c691-403e-801f-420bc2701a95-kube-api-access-b46sd\") pod \"openstack-operator-controller-init-744b85dfd5-tlqzj\" (UID: \"10f9b3c9-c691-403e-801f-420bc2701a95\") " pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.554220 4758 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:16 crc kubenswrapper[4758]: I0130 08:45:16.088694 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj"] Jan 30 08:45:16 crc kubenswrapper[4758]: I0130 08:45:16.451216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" event={"ID":"10f9b3c9-c691-403e-801f-420bc2701a95","Type":"ContainerStarted","Data":"968aa089babba809e58ff54d68160c8e9636488db2cab397ece41f90e6e28500"} Jan 30 08:45:22 crc kubenswrapper[4758]: I0130 08:45:22.387247 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:45:22 crc kubenswrapper[4758]: I0130 08:45:22.387845 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:45:23 crc kubenswrapper[4758]: I0130 08:45:23.515899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" event={"ID":"10f9b3c9-c691-403e-801f-420bc2701a95","Type":"ContainerStarted","Data":"5f4242a3c6b9c42575f0f215a6d7484a648d1ad6376bba08b785a13514c7f1c3"} Jan 30 08:45:23 crc kubenswrapper[4758]: I0130 08:45:23.516241 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:23 crc 
kubenswrapper[4758]: I0130 08:45:23.546250 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" podStartSLOduration=1.7005568119999999 podStartE2EDuration="8.546230595s" podCreationTimestamp="2026-01-30 08:45:15 +0000 UTC" firstStartedPulling="2026-01-30 08:45:16.099163699 +0000 UTC m=+921.071475250" lastFinishedPulling="2026-01-30 08:45:22.944837482 +0000 UTC m=+927.917149033" observedRunningTime="2026-01-30 08:45:23.541581859 +0000 UTC m=+928.513893410" watchObservedRunningTime="2026-01-30 08:45:23.546230595 +0000 UTC m=+928.518542156" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.262576 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.264228 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.282218 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.397772 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.397898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " 
pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.397976 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.499319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.499561 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.499690 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.500010 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " 
pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.500229 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.531009 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.579897 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.230205 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:33 crc kubenswrapper[4758]: W0130 08:45:33.235110 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364bc201_959b_4237_a379_9596b1223cbc.slice/crio-4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d WatchSource:0}: Error finding container 4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d: Status 404 returned error can't find the container with id 4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.574703 4758 generic.go:334] "Generic (PLEG): container finished" podID="364bc201-959b-4237-a379-9596b1223cbc" containerID="69a7ca6a35440c8d6a6b5c64641df205b1f58c73899c4a6f6ad67e7fef9baab8" exitCode=0 Jan 30 
08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.574759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"69a7ca6a35440c8d6a6b5c64641df205b1f58c73899c4a6f6ad67e7fef9baab8"} Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.574798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerStarted","Data":"4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d"} Jan 30 08:45:35 crc kubenswrapper[4758]: I0130 08:45:35.557023 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:35 crc kubenswrapper[4758]: I0130 08:45:35.589018 4758 generic.go:334] "Generic (PLEG): container finished" podID="364bc201-959b-4237-a379-9596b1223cbc" containerID="4ac71eee89e7259c48423a040dbed7ad9542787e6ed5a91e7cacd0c60611852e" exitCode=0 Jan 30 08:45:35 crc kubenswrapper[4758]: I0130 08:45:35.589081 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"4ac71eee89e7259c48423a040dbed7ad9542787e6ed5a91e7cacd0c60611852e"} Jan 30 08:45:36 crc kubenswrapper[4758]: I0130 08:45:36.596812 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerStarted","Data":"cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce"} Jan 30 08:45:36 crc kubenswrapper[4758]: I0130 08:45:36.617427 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2t2jt" podStartSLOduration=2.014991932 
podStartE2EDuration="4.617408289s" podCreationTimestamp="2026-01-30 08:45:32 +0000 UTC" firstStartedPulling="2026-01-30 08:45:33.57610463 +0000 UTC m=+938.548416181" lastFinishedPulling="2026-01-30 08:45:36.178520977 +0000 UTC m=+941.150832538" observedRunningTime="2026-01-30 08:45:36.614006161 +0000 UTC m=+941.586317712" watchObservedRunningTime="2026-01-30 08:45:36.617408289 +0000 UTC m=+941.589719840" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.580984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.581545 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.661390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.796208 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.986252 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:44 crc kubenswrapper[4758]: I0130 08:45:44.710559 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2t2jt" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" containerID="cri-o://cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce" gracePeriod=2 Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723143 4758 generic.go:334] "Generic (PLEG): container finished" podID="364bc201-959b-4237-a379-9596b1223cbc" containerID="cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce" exitCode=0 Jan 30 08:45:45 crc 
kubenswrapper[4758]: I0130 08:45:45.723247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce"} Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723555 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d"} Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723577 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.745381 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.820599 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"364bc201-959b-4237-a379-9596b1223cbc\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.820677 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"364bc201-959b-4237-a379-9596b1223cbc\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.820863 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"364bc201-959b-4237-a379-9596b1223cbc\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.826244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities" (OuterVolumeSpecName: "utilities") pod "364bc201-959b-4237-a379-9596b1223cbc" (UID: "364bc201-959b-4237-a379-9596b1223cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.860480 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2" (OuterVolumeSpecName: "kube-api-access-xxbd2") pod "364bc201-959b-4237-a379-9596b1223cbc" (UID: "364bc201-959b-4237-a379-9596b1223cbc"). InnerVolumeSpecName "kube-api-access-xxbd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.896253 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "364bc201-959b-4237-a379-9596b1223cbc" (UID: "364bc201-959b-4237-a379-9596b1223cbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.935907 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.935953 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.935968 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:46 crc kubenswrapper[4758]: I0130 08:45:46.728178 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:46 crc kubenswrapper[4758]: I0130 08:45:46.757377 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:46 crc kubenswrapper[4758]: I0130 08:45:46.761684 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:47 crc kubenswrapper[4758]: I0130 08:45:47.774438 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364bc201-959b-4237-a379-9596b1223cbc" path="/var/lib/kubelet/pods/364bc201-959b-4237-a379-9596b1223cbc/volumes" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.387296 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.387357 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.387421 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.388073 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.388138 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6" gracePeriod=600 Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786440 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6" exitCode=0 Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786722 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"} Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786747 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2"} Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786762 4758 scope.go:117] "RemoveContainer" containerID="deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852140 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq"] Jan 30 08:45:53 crc kubenswrapper[4758]: E0130 08:45:53.852648 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-content" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852660 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-content" Jan 30 08:45:53 crc kubenswrapper[4758]: E0130 08:45:53.852670 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-utilities" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852677 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-utilities" Jan 30 08:45:53 crc kubenswrapper[4758]: E0130 08:45:53.852692 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852699 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852809 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.853223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.861083 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.861852 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.864096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-st9mx" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.873487 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xnjg2" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.879914 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.880577 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.883573 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r4nwl" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.895893 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.906514 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.922139 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.922965 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.925954 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.932691 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggjc\" (UniqueName: \"kubernetes.io/projected/1c4d1258-0416-49d0-a3a5-6ece70dc0c46-kube-api-access-kggjc\") pod \"barbican-operator-controller-manager-566c8844c5-6nj4p\" (UID: \"1c4d1258-0416-49d0-a3a5-6ece70dc0c46\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.932819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-486cf\" (UniqueName: \"kubernetes.io/projected/62b3bb0d-894a-4cb1-b644-d42f3cba98d7-kube-api-access-486cf\") pod \"cinder-operator-controller-manager-5f9bbdc844-6cgsq\" (UID: \"62b3bb0d-894a-4cb1-b644-d42f3cba98d7\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.933309 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kbt6n" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.948296 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.977091 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-jct4c"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.977911 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.981331 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xldcc" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.982492 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.983532 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.984959 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xczqj" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.007103 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.010839 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-jct4c"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.033867 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zck\" (UniqueName: \"kubernetes.io/projected/dc189df6-25bc-4d6e-aa30-05ce0db12721-kube-api-access-s7zck\") pod \"designate-operator-controller-manager-8f4c5cb64-cp9km\" (UID: \"dc189df6-25bc-4d6e-aa30-05ce0db12721\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.033940 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggjc\" (UniqueName: 
\"kubernetes.io/projected/1c4d1258-0416-49d0-a3a5-6ece70dc0c46-kube-api-access-kggjc\") pod \"barbican-operator-controller-manager-566c8844c5-6nj4p\" (UID: \"1c4d1258-0416-49d0-a3a5-6ece70dc0c46\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.034069 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97nc\" (UniqueName: \"kubernetes.io/projected/9579fc9d-6eae-4249-ac43-35144ed58bed-kube-api-access-s97nc\") pod \"heat-operator-controller-manager-54985f5875-jct4c\" (UID: \"9579fc9d-6eae-4249-ac43-35144ed58bed\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.034101 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-486cf\" (UniqueName: \"kubernetes.io/projected/62b3bb0d-894a-4cb1-b644-d42f3cba98d7-kube-api-access-486cf\") pod \"cinder-operator-controller-manager-5f9bbdc844-6cgsq\" (UID: \"62b3bb0d-894a-4cb1-b644-d42f3cba98d7\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.034125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mknpj\" (UniqueName: \"kubernetes.io/projected/fe68673c-8979-46ee-a4aa-f95bcd7b4e8a-kube-api-access-mknpj\") pod \"glance-operator-controller-manager-784f59d4f4-sw42x\" (UID: \"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.076089 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-5k6df"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.076972 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.087880 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.088153 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mxwhp" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.103906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-486cf\" (UniqueName: \"kubernetes.io/projected/62b3bb0d-894a-4cb1-b644-d42f3cba98d7-kube-api-access-486cf\") pod \"cinder-operator-controller-manager-5f9bbdc844-6cgsq\" (UID: \"62b3bb0d-894a-4cb1-b644-d42f3cba98d7\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.106288 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.110794 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.123567 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zdbjv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.136417 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137687 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9mk\" (UniqueName: \"kubernetes.io/projected/ef31968c-db2e-4083-a08f-19a8daf0ac2d-kube-api-access-2b9mk\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137840 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97nc\" (UniqueName: \"kubernetes.io/projected/9579fc9d-6eae-4249-ac43-35144ed58bed-kube-api-access-s97nc\") pod \"heat-operator-controller-manager-54985f5875-jct4c\" (UID: \"9579fc9d-6eae-4249-ac43-35144ed58bed\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mknpj\" (UniqueName: \"kubernetes.io/projected/fe68673c-8979-46ee-a4aa-f95bcd7b4e8a-kube-api-access-mknpj\") pod \"glance-operator-controller-manager-784f59d4f4-sw42x\" (UID: \"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137910 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcl4\" (UniqueName: \"kubernetes.io/projected/b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2-kube-api-access-hlcl4\") pod \"horizon-operator-controller-manager-5fb775575f-vjdn9\" (UID: \"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137942 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zck\" (UniqueName: \"kubernetes.io/projected/dc189df6-25bc-4d6e-aa30-05ce0db12721-kube-api-access-s7zck\") pod \"designate-operator-controller-manager-8f4c5cb64-cp9km\" (UID: \"dc189df6-25bc-4d6e-aa30-05ce0db12721\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.140720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kggjc\" (UniqueName: \"kubernetes.io/projected/1c4d1258-0416-49d0-a3a5-6ece70dc0c46-kube-api-access-kggjc\") pod \"barbican-operator-controller-manager-566c8844c5-6nj4p\" (UID: \"1c4d1258-0416-49d0-a3a5-6ece70dc0c46\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.156579 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-5k6df"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 
08:45:54.168630 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.169340 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.169398 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.175279 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6k9xw" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.178096 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.191436 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.197679 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.198854 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.212735 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mknpj\" (UniqueName: \"kubernetes.io/projected/fe68673c-8979-46ee-a4aa-f95bcd7b4e8a-kube-api-access-mknpj\") pod \"glance-operator-controller-manager-784f59d4f4-sw42x\" (UID: \"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.222663 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wn9f9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.234616 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zck\" (UniqueName: \"kubernetes.io/projected/dc189df6-25bc-4d6e-aa30-05ce0db12721-kube-api-access-s7zck\") pod \"designate-operator-controller-manager-8f4c5cb64-cp9km\" (UID: \"dc189df6-25bc-4d6e-aa30-05ce0db12721\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.236326 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97nc\" (UniqueName: \"kubernetes.io/projected/9579fc9d-6eae-4249-ac43-35144ed58bed-kube-api-access-s97nc\") pod \"heat-operator-controller-manager-54985f5875-jct4c\" (UID: \"9579fc9d-6eae-4249-ac43-35144ed58bed\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.237266 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.237988 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238708 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlcl4\" (UniqueName: \"kubernetes.io/projected/b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2-kube-api-access-hlcl4\") pod \"horizon-operator-controller-manager-5fb775575f-vjdn9\" (UID: \"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238794 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22cl2\" (UniqueName: \"kubernetes.io/projected/5c2a7d2b-62a1-468b-a3b3-fe77698a41a2-kube-api-access-22cl2\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-4jmpb\" (UID: \"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9mk\" (UniqueName: \"kubernetes.io/projected/ef31968c-db2e-4083-a08f-19a8daf0ac2d-kube-api-access-2b9mk\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238841 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfdx\" (UniqueName: \"kubernetes.io/projected/b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7-kube-api-access-4tfdx\") pod \"keystone-operator-controller-manager-6c9d56f9bd-h89d9\" (UID: \"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.239206 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.239252 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:45:54.739236335 +0000 UTC m=+959.711547886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.256504 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.257231 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cqxpc" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.290187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.311853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9mk\" (UniqueName: \"kubernetes.io/projected/ef31968c-db2e-4083-a08f-19a8daf0ac2d-kube-api-access-2b9mk\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.324264 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.337282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347749 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g942j\" (UniqueName: \"kubernetes.io/projected/ac7c91ce-d4d9-4754-9828-a43140218228-kube-api-access-g942j\") pod \"manila-operator-controller-manager-74954f9f78-kxmjn\" (UID: \"ac7c91ce-d4d9-4754-9828-a43140218228\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347853 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44wzb\" (UniqueName: \"kubernetes.io/projected/0565da5c-02e0-409f-b801-e06c3e79ef47-kube-api-access-44wzb\") pod \"mariadb-operator-controller-manager-67bf948998-c5d9k\" (UID: \"0565da5c-02e0-409f-b801-e06c3e79ef47\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347877 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22cl2\" (UniqueName: \"kubernetes.io/projected/5c2a7d2b-62a1-468b-a3b3-fe77698a41a2-kube-api-access-22cl2\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-4jmpb\" (UID: \"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfdx\" (UniqueName: \"kubernetes.io/projected/b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7-kube-api-access-4tfdx\") pod 
\"keystone-operator-controller-manager-6c9d56f9bd-h89d9\" (UID: \"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.359357 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlcl4\" (UniqueName: \"kubernetes.io/projected/b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2-kube-api-access-hlcl4\") pod \"horizon-operator-controller-manager-5fb775575f-vjdn9\" (UID: \"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.365905 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.366785 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.375502 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-w6x8z" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.396375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfdx\" (UniqueName: \"kubernetes.io/projected/b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7-kube-api-access-4tfdx\") pod \"keystone-operator-controller-manager-6c9d56f9bd-h89d9\" (UID: \"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.403977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22cl2\" (UniqueName: \"kubernetes.io/projected/5c2a7d2b-62a1-468b-a3b3-fe77698a41a2-kube-api-access-22cl2\") pod 
\"ironic-operator-controller-manager-6fd9bbb6f6-4jmpb\" (UID: \"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.419710 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.420759 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.467686 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44wzb\" (UniqueName: \"kubernetes.io/projected/0565da5c-02e0-409f-b801-e06c3e79ef47-kube-api-access-44wzb\") pod \"mariadb-operator-controller-manager-67bf948998-c5d9k\" (UID: \"0565da5c-02e0-409f-b801-e06c3e79ef47\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.468026 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g942j\" (UniqueName: \"kubernetes.io/projected/ac7c91ce-d4d9-4754-9828-a43140218228-kube-api-access-g942j\") pod \"manila-operator-controller-manager-74954f9f78-kxmjn\" (UID: \"ac7c91ce-d4d9-4754-9828-a43140218228\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.470449 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.473486 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrlm9\" (UniqueName: \"kubernetes.io/projected/b6d614d3-1ced-4b27-bd91-8edd410e5fc5-kube-api-access-qrlm9\") pod 
\"neutron-operator-controller-manager-6cfc4f6754-7s6n2\" (UID: \"b6d614d3-1ced-4b27-bd91-8edd410e5fc5\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.482230 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.487409 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rc2gx" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.512898 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.576970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g942j\" (UniqueName: \"kubernetes.io/projected/ac7c91ce-d4d9-4754-9828-a43140218228-kube-api-access-g942j\") pod \"manila-operator-controller-manager-74954f9f78-kxmjn\" (UID: \"ac7c91ce-d4d9-4754-9828-a43140218228\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.586780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrlm9\" (UniqueName: \"kubernetes.io/projected/b6d614d3-1ced-4b27-bd91-8edd410e5fc5-kube-api-access-qrlm9\") pod \"neutron-operator-controller-manager-6cfc4f6754-7s6n2\" (UID: \"b6d614d3-1ced-4b27-bd91-8edd410e5fc5\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.596736 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjts\" (UniqueName: 
\"kubernetes.io/projected/0b006f91-5b27-4342-935b-c7a7f174c03b-kube-api-access-wsjts\") pod \"nova-operator-controller-manager-67f5956bc9-hp2mv\" (UID: \"0b006f91-5b27-4342-935b-c7a7f174c03b\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.591934 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44wzb\" (UniqueName: \"kubernetes.io/projected/0565da5c-02e0-409f-b801-e06c3e79ef47-kube-api-access-44wzb\") pod \"mariadb-operator-controller-manager-67bf948998-c5d9k\" (UID: \"0565da5c-02e0-409f-b801-e06c3e79ef47\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.597053 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.605006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.632748 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.635115 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.636018 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.648120 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.658424 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.660781 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.660897 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.665673 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrlm9\" (UniqueName: \"kubernetes.io/projected/b6d614d3-1ced-4b27-bd91-8edd410e5fc5-kube-api-access-qrlm9\") pod \"neutron-operator-controller-manager-6cfc4f6754-7s6n2\" (UID: \"b6d614d3-1ced-4b27-bd91-8edd410e5fc5\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.669118 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.670482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ptnjs" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.676629 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mj5td" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.677166 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.677725 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.678600 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.682474 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8jh4b" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.697976 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.699115 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.699738 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjts\" (UniqueName: \"kubernetes.io/projected/0b006f91-5b27-4342-935b-c7a7f174c03b-kube-api-access-wsjts\") pod \"nova-operator-controller-manager-67f5956bc9-hp2mv\" (UID: \"0b006f91-5b27-4342-935b-c7a7f174c03b\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.705962 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.717077 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.718343 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.724858 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dz8rk" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.732944 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.733848 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.736766 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4v29k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.743788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjts\" (UniqueName: \"kubernetes.io/projected/0b006f91-5b27-4342-935b-c7a7f174c03b-kube-api-access-wsjts\") pod \"nova-operator-controller-manager-67f5956bc9-hp2mv\" (UID: \"0b006f91-5b27-4342-935b-c7a7f174c03b\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.744791 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.745770 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.755139 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-24gx9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.760481 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.773510 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.795917 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802146 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhgb\" (UniqueName: \"kubernetes.io/projected/8cb0c6cc-e254-4dae-b433-397504fba6dc-kube-api-access-2zhgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v898l\" (UniqueName: \"kubernetes.io/projected/a104527b-98dc-4120-91b5-6e7e9466b9a3-kube-api-access-v898l\") pod \"octavia-operator-controller-manager-694c6dcf95-kdnzv\" (UID: \"a104527b-98dc-4120-91b5-6e7e9466b9a3\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802237 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802301 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf6cb\" (UniqueName: \"kubernetes.io/projected/03b807f9-cd53-4189-b7df-09c5ea5fdf53-kube-api-access-lf6cb\") pod \"ovn-operator-controller-manager-788c46999f-qlc8l\" (UID: \"03b807f9-cd53-4189-b7df-09c5ea5fdf53\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802320 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.805335 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.805419 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:45:55.805381637 +0000 UTC m=+960.777693188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.806477 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.810129 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.811434 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.820235 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gc64n" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.839109 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.840174 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.861616 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-phszr" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.903079 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.917860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkvz\" (UniqueName: \"kubernetes.io/projected/b271df00-e9f2-4c58-94e7-22ea4b7d7eaf-kube-api-access-8qkvz\") pod \"telemetry-operator-controller-manager-76cd99594-xhszj\" (UID: \"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.917923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhgb\" (UniqueName: \"kubernetes.io/projected/8cb0c6cc-e254-4dae-b433-397504fba6dc-kube-api-access-2zhgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.917971 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v898l\" (UniqueName: \"kubernetes.io/projected/a104527b-98dc-4120-91b5-6e7e9466b9a3-kube-api-access-v898l\") pod \"octavia-operator-controller-manager-694c6dcf95-kdnzv\" (UID: \"a104527b-98dc-4120-91b5-6e7e9466b9a3\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918013 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj6h\" (UniqueName: \"kubernetes.io/projected/e4787454-8070-449e-a7d0-2ff179eaaff3-kube-api-access-ctj6h\") pod \"swift-operator-controller-manager-7d4f9d9c9b-t7dht\" (UID: \"e4787454-8070-449e-a7d0-2ff179eaaff3\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918059 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlxx\" (UniqueName: \"kubernetes.io/projected/178c1ff9-1a2a-4c4a-8258-89c267a5d0aa-kube-api-access-grlxx\") pod \"placement-operator-controller-manager-5b964cf4cd-vzcpf\" (UID: \"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9sw\" (UniqueName: \"kubernetes.io/projected/fe039ec9-aaec-4e17-8eac-c7719245ba4d-kube-api-access-6k9sw\") pod \"test-operator-controller-manager-56f8bfcd9f-pmvv4\" (UID: \"fe039ec9-aaec-4e17-8eac-c7719245ba4d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918107 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf6cb\" (UniqueName: \"kubernetes.io/projected/03b807f9-cd53-4189-b7df-09c5ea5fdf53-kube-api-access-lf6cb\") pod \"ovn-operator-controller-manager-788c46999f-qlc8l\" (UID: \"03b807f9-cd53-4189-b7df-09c5ea5fdf53\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918123 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.918246 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.918287 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:45:55.418273735 +0000 UTC m=+960.390585286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.925773 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc"] Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cm6c\" (UniqueName: \"kubernetes.io/projected/73790ffa-61b1-489c-94c9-3934af94185f-kube-api-access-7cm6c\") pod \"watcher-operator-controller-manager-5bf648c946-ngzmc\" (UID: \"73790ffa-61b1-489c-94c9-3934af94185f\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020331 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-grlxx\" (UniqueName: \"kubernetes.io/projected/178c1ff9-1a2a-4c4a-8258-89c267a5d0aa-kube-api-access-grlxx\") pod \"placement-operator-controller-manager-5b964cf4cd-vzcpf\" (UID: \"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020374 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9sw\" (UniqueName: \"kubernetes.io/projected/fe039ec9-aaec-4e17-8eac-c7719245ba4d-kube-api-access-6k9sw\") pod \"test-operator-controller-manager-56f8bfcd9f-pmvv4\" (UID: \"fe039ec9-aaec-4e17-8eac-c7719245ba4d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qkvz\" (UniqueName: \"kubernetes.io/projected/b271df00-e9f2-4c58-94e7-22ea4b7d7eaf-kube-api-access-8qkvz\") pod \"telemetry-operator-controller-manager-76cd99594-xhszj\" (UID: \"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020560 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj6h\" (UniqueName: \"kubernetes.io/projected/e4787454-8070-449e-a7d0-2ff179eaaff3-kube-api-access-ctj6h\") pod \"swift-operator-controller-manager-7d4f9d9c9b-t7dht\" (UID: \"e4787454-8070-449e-a7d0-2ff179eaaff3\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.024008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhgb\" (UniqueName: 
\"kubernetes.io/projected/8cb0c6cc-e254-4dae-b433-397504fba6dc-kube-api-access-2zhgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.070844 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlxx\" (UniqueName: \"kubernetes.io/projected/178c1ff9-1a2a-4c4a-8258-89c267a5d0aa-kube-api-access-grlxx\") pod \"placement-operator-controller-manager-5b964cf4cd-vzcpf\" (UID: \"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.124631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cm6c\" (UniqueName: \"kubernetes.io/projected/73790ffa-61b1-489c-94c9-3934af94185f-kube-api-access-7cm6c\") pod \"watcher-operator-controller-manager-5bf648c946-ngzmc\" (UID: \"73790ffa-61b1-489c-94c9-3934af94185f\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.147386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qkvz\" (UniqueName: \"kubernetes.io/projected/b271df00-e9f2-4c58-94e7-22ea4b7d7eaf-kube-api-access-8qkvz\") pod \"telemetry-operator-controller-manager-76cd99594-xhszj\" (UID: \"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.172842 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.282127 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.301087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9sw\" (UniqueName: \"kubernetes.io/projected/fe039ec9-aaec-4e17-8eac-c7719245ba4d-kube-api-access-6k9sw\") pod \"test-operator-controller-manager-56f8bfcd9f-pmvv4\" (UID: \"fe039ec9-aaec-4e17-8eac-c7719245ba4d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.320956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf6cb\" (UniqueName: \"kubernetes.io/projected/03b807f9-cd53-4189-b7df-09c5ea5fdf53-kube-api-access-lf6cb\") pod \"ovn-operator-controller-manager-788c46999f-qlc8l\" (UID: \"03b807f9-cd53-4189-b7df-09c5ea5fdf53\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.332828 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.376205 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr"] Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.377840 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.378836 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.412895 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v898l\" (UniqueName: \"kubernetes.io/projected/a104527b-98dc-4120-91b5-6e7e9466b9a3-kube-api-access-v898l\") pod \"octavia-operator-controller-manager-694c6dcf95-kdnzv\" (UID: \"a104527b-98dc-4120-91b5-6e7e9466b9a3\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.438024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cm6c\" (UniqueName: \"kubernetes.io/projected/73790ffa-61b1-489c-94c9-3934af94185f-kube-api-access-7cm6c\") pod \"watcher-operator-controller-manager-5bf648c946-ngzmc\" (UID: \"73790ffa-61b1-489c-94c9-3934af94185f\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.438829 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rj47s" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.438978 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.439090 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441192 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kdn\" (UniqueName: \"kubernetes.io/projected/0c09af13-f67b-4306-9039-f02d5f9e2f53-kube-api-access-v6kdn\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " 
pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441347 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.441491 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.441528 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. 
No retries permitted until 2026-01-30 08:45:56.441514165 +0000 UTC m=+961.413825716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.461168 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr"] Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.462029 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj6h\" (UniqueName: \"kubernetes.io/projected/e4787454-8070-449e-a7d0-2ff179eaaff3-kube-api-access-ctj6h\") pod \"swift-operator-controller-manager-7d4f9d9c9b-t7dht\" (UID: \"e4787454-8070-449e-a7d0-2ff179eaaff3\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.471987 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.647826 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.647900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.647948 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6kdn\" (UniqueName: \"kubernetes.io/projected/0c09af13-f67b-4306-9039-f02d5f9e2f53-kube-api-access-v6kdn\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.683787 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.683857 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:45:56.183835542 +0000 UTC m=+961.156147093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.693015 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.058980 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.081131 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.082181 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.082383 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:45:58.082367701 +0000 UTC m=+963.054679252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.139308 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.139391 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.140101 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:56.640079051 +0000 UTC m=+961.612390602 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.564880 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.579191 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.568504 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.579525 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:45:58.57950588 +0000 UTC m=+963.551817431 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.579628 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.579659 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:57.579649854 +0000 UTC m=+962.551961405 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.595156 4758 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hvsqq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.595217 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" podUID="f452c53b-893b-4060-b573-595e98576792" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.635219 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" podUID="10f9b3c9-c691-403e-801f-420bc2701a95" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.680287 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.682231 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.682321 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:57.682302579 +0000 UTC m=+962.654614130 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.800063 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kdn\" (UniqueName: \"kubernetes.io/projected/0c09af13-f67b-4306-9039-f02d5f9e2f53-kube-api-access-v6kdn\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824172 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824848 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824861 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824869 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824933 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.837245 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xbmrt" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.844995 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.892285 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhjx\" (UniqueName: \"kubernetes.io/projected/fbb26261-18aa-4ba0-940e-788200175600-kube-api-access-lqhjx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gt4n2\" (UID: \"fbb26261-18aa-4ba0-940e-788200175600\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.003084 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhjx\" (UniqueName: \"kubernetes.io/projected/fbb26261-18aa-4ba0-940e-788200175600-kube-api-access-lqhjx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gt4n2\" (UID: \"fbb26261-18aa-4ba0-940e-788200175600\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: W0130 08:45:57.030445 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62b3bb0d_894a_4cb1_b644_d42f3cba98d7.slice/crio-0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331 WatchSource:0}: Error finding container 0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331: Status 404 returned error can't find the container with id 0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331 Jan 30 
08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.037973 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhjx\" (UniqueName: \"kubernetes.io/projected/fbb26261-18aa-4ba0-940e-788200175600-kube-api-access-lqhjx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gt4n2\" (UID: \"fbb26261-18aa-4ba0-940e-788200175600\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.277355 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.359551 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-jct4c"] Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.617632 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.617845 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.617892 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:59.617878364 +0000 UTC m=+964.590189915 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.623067 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn"] Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.718611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.718909 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.719337 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:59.719285211 +0000 UTC m=+964.691596762 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.825846 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" event={"ID":"62b3bb0d-894a-4cb1-b644-d42f3cba98d7","Type":"ContainerStarted","Data":"0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.837436 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" event={"ID":"ac7c91ce-d4d9-4754-9828-a43140218228","Type":"ContainerStarted","Data":"0961b1b947fb3f57791dda7650e0e677d8544b69d666f6956c8c438d2c9e0fe5"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.841242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" event={"ID":"1c4d1258-0416-49d0-a3a5-6ece70dc0c46","Type":"ContainerStarted","Data":"6e29cd568f358c1fd1119de90180229fef6f1b348f9d696794fcea6a06816677"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.844802 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" event={"ID":"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a","Type":"ContainerStarted","Data":"ba598062737afa2a71c6e3ac5e1c0a6f413a9d82323895c21031359b472b7202"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.855433 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" 
event={"ID":"9579fc9d-6eae-4249-ac43-35144ed58bed","Type":"ContainerStarted","Data":"6e886e0a878e5cf5c9123899d27c4ba3453be9aae15a093fadd90af716497064"} Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.126389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.126542 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.126591 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:46:02.126576716 +0000 UTC m=+967.098888267 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.494504 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.539319 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.584191 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb271df00_e9f2_4c58_94e7_22ea4b7d7eaf.slice/crio-24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb WatchSource:0}: Error finding container 24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb: Status 404 returned error can't find the container with id 24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.587319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.587582 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.587658 4758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:46:02.587635466 +0000 UTC m=+967.559947017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.588581 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.657859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.694531 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.707979 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.710943 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.729219 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.736362 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l"] Jan 30 
08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.754927 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d614d3_1ced_4b27_bd91_8edd410e5fc5.slice/crio-0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346 WatchSource:0}: Error finding container 0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346: Status 404 returned error can't find the container with id 0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346 Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.766166 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b807f9_cd53_4189_b7df_09c5ea5fdf53.slice/crio-98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772 WatchSource:0}: Error finding container 98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772: Status 404 returned error can't find the container with id 98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772 Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.783896 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b006f91_5b27_4342_935b_c7a7f174c03b.slice/crio-20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9 WatchSource:0}: Error finding container 20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9: Status 404 returned error can't find the container with id 20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9 Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.791358 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.800589 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.816122 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.841738 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a4d0cd_ddb6_43d6_8f3e_457f519fb8c2.slice/crio-a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57 WatchSource:0}: Error finding container a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57: Status 404 returned error can't find the container with id a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57 Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.855292 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k"] Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.883208 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grlxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-vzcpf_openstack-operators(178c1ff9-1a2a-4c4a-8258-89c267a5d0aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.885256 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podUID="178c1ff9-1a2a-4c4a-8258-89c267a5d0aa" Jan 30 08:45:58 crc 
kubenswrapper[4758]: I0130 08:45:58.936714 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.950479 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0565da5c_02e0_409f_b801_e06c3e79ef47.slice/crio-0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d WatchSource:0}: Error finding container 0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d: Status 404 returned error can't find the container with id 0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.982767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" event={"ID":"dc189df6-25bc-4d6e-aa30-05ce0db12721","Type":"ContainerStarted","Data":"c297ec26bf7669592da7b856a7716b8c62bee181f96f369221dff2619d7aa09a"} Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.988014 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.993916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" event={"ID":"03b807f9-cd53-4189-b7df-09c5ea5fdf53","Type":"ContainerStarted","Data":"98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772"} Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.012738 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7cm6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5bf648c946-ngzmc_openstack-operators(73790ffa-61b1-489c-94c9-3934af94185f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.014102 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podUID="73790ffa-61b1-489c-94c9-3934af94185f" Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.025173 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" event={"ID":"fbb26261-18aa-4ba0-940e-788200175600","Type":"ContainerStarted","Data":"f15d170dca7ce76872a233522052bacd75d776abb8543921bb4a8785471ae888"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.041265 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" 
event={"ID":"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2","Type":"ContainerStarted","Data":"00d427a01faba891dda70adc27c7d61bd017b46b366975d104740e8f99ec8197"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.068343 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" event={"ID":"e4787454-8070-449e-a7d0-2ff179eaaff3","Type":"ContainerStarted","Data":"ce945b9e7cf19b98a6f520a1bcd09926918ae3a3d4194168165a841ada57e5c9"} Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.068689 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v898l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-kdnzv_openstack-operators(a104527b-98dc-4120-91b5-6e7e9466b9a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.069971 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.076205 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" event={"ID":"0b006f91-5b27-4342-935b-c7a7f174c03b","Type":"ContainerStarted","Data":"20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9"} Jan 30 08:45:59 crc 
kubenswrapper[4758]: I0130 08:45:59.078508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" event={"ID":"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf","Type":"ContainerStarted","Data":"24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.079655 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" event={"ID":"fe039ec9-aaec-4e17-8eac-c7719245ba4d","Type":"ContainerStarted","Data":"461e979a4a31f53298e8d50966345d9d6102907eab8b154933f452defcb0ed55"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.082938 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" event={"ID":"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7","Type":"ContainerStarted","Data":"b10de5108ad68e20062602ec056a480d36b3509d6405d60165cf1744982aa306"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.087249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" event={"ID":"b6d614d3-1ced-4b27-bd91-8edd410e5fc5","Type":"ContainerStarted","Data":"0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.089577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" event={"ID":"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2","Type":"ContainerStarted","Data":"a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.713387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod 
\"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.713592 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.713826 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:03.713806198 +0000 UTC m=+968.686117749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.815920 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.817588 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.817650 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:46:03.81762452 +0000 UTC m=+968.789936071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.140147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" event={"ID":"a104527b-98dc-4120-91b5-6e7e9466b9a3","Type":"ContainerStarted","Data":"a9aeeb357d350139aa29e70aed36ea3fd890af5d2177621196ea22fa505d08c4"} Jan 30 08:46:00 crc kubenswrapper[4758]: E0130 08:46:00.147298 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.156361 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" event={"ID":"0565da5c-02e0-409f-b801-e06c3e79ef47","Type":"ContainerStarted","Data":"0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d"} Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.172375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" event={"ID":"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa","Type":"ContainerStarted","Data":"ef30ab686332c6daf386b4a6edb9e0e5cef4a54bfc34ead8a2bd1cd5291aa835"} Jan 30 08:46:00 crc kubenswrapper[4758]: E0130 08:46:00.177604 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podUID="178c1ff9-1a2a-4c4a-8258-89c267a5d0aa" Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.180386 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" event={"ID":"73790ffa-61b1-489c-94c9-3934af94185f","Type":"ContainerStarted","Data":"b5abad725da64701824534cc8536be55dd296f9cb758cd1bab2bec06cb67aadf"} Jan 30 08:46:00 crc kubenswrapper[4758]: E0130 08:46:00.182894 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podUID="73790ffa-61b1-489c-94c9-3934af94185f" Jan 30 08:46:01 crc kubenswrapper[4758]: E0130 08:46:01.197014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:01 crc kubenswrapper[4758]: E0130 08:46:01.227957 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podUID="73790ffa-61b1-489c-94c9-3934af94185f" Jan 30 08:46:01 crc kubenswrapper[4758]: E0130 08:46:01.228862 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podUID="178c1ff9-1a2a-4c4a-8258-89c267a5d0aa" Jan 30 08:46:02 crc kubenswrapper[4758]: I0130 08:46:02.185552 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.185796 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.185852 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:46:10.185832954 +0000 UTC m=+975.158144505 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:46:02 crc kubenswrapper[4758]: I0130 08:46:02.591817 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.591960 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.592006 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:46:10.591992795 +0000 UTC m=+975.564304346 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: I0130 08:46:03.715538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.715709 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.715920 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:11.715903546 +0000 UTC m=+976.688215097 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: I0130 08:46:03.918593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.918793 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.918890 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:11.918868282 +0000 UTC m=+976.891179863 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.231490 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.240174 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.340912 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mxwhp" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.350171 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.638164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.660020 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.961683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mj5td" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.970111 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.755981 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.759816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.960388 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.963958 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.995164 4758 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rj47s" Jan 30 08:46:12 crc kubenswrapper[4758]: I0130 08:46:12.003923 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:14 crc kubenswrapper[4758]: E0130 08:46:14.404296 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 30 08:46:14 crc kubenswrapper[4758]: E0130 08:46:14.405078 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hlcl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-vjdn9_openstack-operators(b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:14 crc kubenswrapper[4758]: E0130 08:46:14.406307 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" podUID="b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.297849 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" podUID="b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.387844 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:9dadfafaf8e84cd2ba5f076b2a64261f781f5c357678237202acb1bee2688d25" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.388074 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:9dadfafaf8e84cd2ba5f076b2a64261f781f5c357678237202acb1bee2688d25,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kggjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-566c8844c5-6nj4p_openstack-operators(1c4d1258-0416-49d0-a3a5-6ece70dc0c46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.389226 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" podUID="1c4d1258-0416-49d0-a3a5-6ece70dc0c46" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.959568 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/lmiccini/heat-operator@sha256:9f790ab2e5cc7137dd72c7b6232acb6c6646e421c597fa14c2389e8d76ff6f27" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.959738 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:9f790ab2e5cc7137dd72c7b6232acb6c6646e421c597fa14c2389e8d76ff6f27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s97nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-54985f5875-jct4c_openstack-operators(9579fc9d-6eae-4249-ac43-35144ed58bed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.960840 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" podUID="9579fc9d-6eae-4249-ac43-35144ed58bed" Jan 30 08:46:16 crc kubenswrapper[4758]: E0130 08:46:16.305243 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:9f790ab2e5cc7137dd72c7b6232acb6c6646e421c597fa14c2389e8d76ff6f27\\\"\"" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" podUID="9579fc9d-6eae-4249-ac43-35144ed58bed" Jan 30 08:46:16 crc kubenswrapper[4758]: E0130 08:46:16.305282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:9dadfafaf8e84cd2ba5f076b2a64261f781f5c357678237202acb1bee2688d25\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" podUID="1c4d1258-0416-49d0-a3a5-6ece70dc0c46" Jan 30 08:46:16 crc kubenswrapper[4758]: I0130 08:46:16.897437 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:16 crc kubenswrapper[4758]: I0130 08:46:16.899716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:16 crc kubenswrapper[4758]: I0130 08:46:16.906152 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.038936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.038999 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.039133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod 
\"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.140744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.140829 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.140870 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.141583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.141637 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"certified-operators-xjzxr\" (UID: 
\"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.184577 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.223887 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:20 crc kubenswrapper[4758]: E0130 08:46:20.879672 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:24a7033dccd09885beebba692a7951d5388284a36f285a97607971c10113354e" Jan 30 08:46:20 crc kubenswrapper[4758]: E0130 08:46:20.880204 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:24a7033dccd09885beebba692a7951d5388284a36f285a97607971c10113354e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrlm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6cfc4f6754-7s6n2_openstack-operators(b6d614d3-1ced-4b27-bd91-8edd410e5fc5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:20 crc kubenswrapper[4758]: E0130 08:46:20.881508 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" podUID="b6d614d3-1ced-4b27-bd91-8edd410e5fc5" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.361842 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:24a7033dccd09885beebba692a7951d5388284a36f285a97607971c10113354e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" podUID="b6d614d3-1ced-4b27-bd91-8edd410e5fc5" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.743607 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.743834 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qkvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cd99594-xhszj_openstack-operators(b271df00-e9f2-4c58-94e7-22ea4b7d7eaf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.745652 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" podUID="b271df00-e9f2-4c58-94e7-22ea4b7d7eaf" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.361787 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" podUID="b271df00-e9f2-4c58-94e7-22ea4b7d7eaf" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.435453 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.435641 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k9sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-pmvv4_openstack-operators(fe039ec9-aaec-4e17-8eac-c7719245ba4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.436811 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" podUID="fe039ec9-aaec-4e17-8eac-c7719245ba4d" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.591635 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.592979 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.612224 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.736087 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.736147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.736180 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837102 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837185 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837214 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837698 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837849 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.872534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.913359 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:23 crc kubenswrapper[4758]: E0130 08:46:23.369425 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" podUID="fe039ec9-aaec-4e17-8eac-c7719245ba4d" Jan 30 08:46:24 crc kubenswrapper[4758]: E0130 08:46:24.920613 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 08:46:24 crc kubenswrapper[4758]: E0130 08:46:24.920830 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqhjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gt4n2_openstack-operators(fbb26261-18aa-4ba0-940e-788200175600): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:24 crc kubenswrapper[4758]: E0130 08:46:24.921991 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" podUID="fbb26261-18aa-4ba0-940e-788200175600" Jan 30 08:46:25 crc kubenswrapper[4758]: E0130 08:46:25.391062 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" podUID="fbb26261-18aa-4ba0-940e-788200175600" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.199554 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:dc0be288bd4f98a1e80d21cb9e9a12381b33f9e4d6ecda0f46ca076587660144" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.200087 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:dc0be288bd4f98a1e80d21cb9e9a12381b33f9e4d6ecda0f46ca076587660144,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tfdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c9d56f9bd-h89d9_openstack-operators(b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.201697 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" podUID="b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.392824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:dc0be288bd4f98a1e80d21cb9e9a12381b33f9e4d6ecda0f46ca076587660144\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" podUID="b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.797768 4758 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.797944 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v898l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-kdnzv_openstack-operators(a104527b-98dc-4120-91b5-6e7e9466b9a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.799697 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:27 crc kubenswrapper[4758]: E0130 08:46:27.464585 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:9d4490922c772ceca4b86d2a78e5cc6cd7198099dc637cc6c10428fc9c4e15fb" Jan 30 08:46:27 crc kubenswrapper[4758]: E0130 08:46:27.464773 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:9d4490922c772ceca4b86d2a78e5cc6cd7198099dc637cc6c10428fc9c4e15fb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wsjts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-67f5956bc9-hp2mv_openstack-operators(0b006f91-5b27-4342-935b-c7a7f174c03b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:27 crc kubenswrapper[4758]: E0130 08:46:27.466079 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" podUID="0b006f91-5b27-4342-935b-c7a7f174c03b" Jan 30 08:46:28 crc kubenswrapper[4758]: E0130 08:46:28.407478 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:9d4490922c772ceca4b86d2a78e5cc6cd7198099dc637cc6c10428fc9c4e15fb\\\"\"" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" podUID="0b006f91-5b27-4342-935b-c7a7f174c03b" Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.637489 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79955696d6-5k6df"] Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.667364 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr"] Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.817284 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv"] Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.858787 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:46:29 crc kubenswrapper[4758]: W0130 08:46:29.864886 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cb0c6cc_e254_4dae_b433_397504fba6dc.slice/crio-7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525 WatchSource:0}: Error finding container 7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525: Status 404 returned error can't find the container with id 7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525 Jan 30 08:46:29 crc kubenswrapper[4758]: W0130 08:46:29.878943 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b8851e8_8f4f_4f19_9aa6_e4a05f90d2b1.slice/crio-51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3 WatchSource:0}: Error finding container 51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3: Status 404 returned error can't find the container with id 51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3 Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.883871 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.425715 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1" exitCode=0 Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.426979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.427105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerStarted","Data":"51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.428821 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" event={"ID":"ef31968c-db2e-4083-a08f-19a8daf0ac2d","Type":"ContainerStarted","Data":"e0c8a780eb16297074592133943f9c0d2461557b09ff532c6a9a07d7c1f80791"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.429823 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" event={"ID":"8cb0c6cc-e254-4dae-b433-397504fba6dc","Type":"ContainerStarted","Data":"7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.432583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" event={"ID":"dc189df6-25bc-4d6e-aa30-05ce0db12721","Type":"ContainerStarted","Data":"a834dcccf839a2e389dff65725635cf90843ee6fde5d604219fae8033fb6fcd8"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.432739 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.462001 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" event={"ID":"9579fc9d-6eae-4249-ac43-35144ed58bed","Type":"ContainerStarted","Data":"3ac6bc03858c68cba296e5f5a016ca273cabf61e6dc36123c051d9e9705dda00"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.462788 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.481582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" event={"ID":"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a","Type":"ContainerStarted","Data":"6763991b335690dd7d87a5a123235b36641a52f2dd0986acf59ad0e1dcf7ee26"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.482184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.501025 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" event={"ID":"62b3bb0d-894a-4cb1-b644-d42f3cba98d7","Type":"ContainerStarted","Data":"811708b02ebe609442569f06856770f1a7a24efd51c9c87c3ec32f1c033e0df8"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.501698 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.503235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" 
event={"ID":"e4787454-8070-449e-a7d0-2ff179eaaff3","Type":"ContainerStarted","Data":"07c48a650cb7e17b28e28a01ce1a2403cdccc5d5afa7c3928dfe70599bfdf436"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.503920 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.512473 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerStarted","Data":"06b17b09dd9e50fb149bd1b02970b5fb7919445a9f9ff520c7633494ad9262cf"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.518307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" event={"ID":"ac7c91ce-d4d9-4754-9828-a43140218228","Type":"ContainerStarted","Data":"a4dacef9d0c5d1ca65f297dd9aca2f8ded0efe7810aac8eaae55f99fd872713c"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.518883 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.520468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" event={"ID":"73790ffa-61b1-489c-94c9-3934af94185f","Type":"ContainerStarted","Data":"060b6190b391d3558107a66066e926283193f48bbcdee5b03cf4f60d68be870d"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.520906 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.545360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" 
event={"ID":"03b807f9-cd53-4189-b7df-09c5ea5fdf53","Type":"ContainerStarted","Data":"77711f65b64b85c6c3f9da2f552ec13b6a2bd98ebd9722cc6eda337d023d8e8a"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.545457 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.569322 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" event={"ID":"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2","Type":"ContainerStarted","Data":"d0677b060d86da099904bc16c710292f1bae285b58ebeb597612e0e91771a31b"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.569598 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.575785 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" event={"ID":"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa","Type":"ContainerStarted","Data":"4858a003abaa445853852b12a4beff866c838e8cdbab70ab9203f184e3dc584b"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.577450 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.605419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" event={"ID":"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2","Type":"ContainerStarted","Data":"5baa82b6bf9770e4058731014c86691d09ef445e4a23f53f3d51bf299176aed9"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.605821 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.612718 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" podStartSLOduration=8.46715709 podStartE2EDuration="37.612696714s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:56.940899059 +0000 UTC m=+961.913210600" lastFinishedPulling="2026-01-30 08:46:26.086438673 +0000 UTC m=+991.058750224" observedRunningTime="2026-01-30 08:46:30.609420771 +0000 UTC m=+995.581732332" watchObservedRunningTime="2026-01-30 08:46:30.612696714 +0000 UTC m=+995.585008265" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.625197 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" event={"ID":"0565da5c-02e0-409f-b801-e06c3e79ef47","Type":"ContainerStarted","Data":"2501cb20d7633502732aff0c05742718c8383f08b48477861506d3c1e3788b57"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.625838 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.642697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" event={"ID":"0c09af13-f67b-4306-9039-f02d5f9e2f53","Type":"ContainerStarted","Data":"53cce0fe3356f21165a99bfb70cce6de3aa0a96d96e964a68c4f68d5dd64e058"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.642738 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" event={"ID":"0c09af13-f67b-4306-9039-f02d5f9e2f53","Type":"ContainerStarted","Data":"b00cce98f9f7816272307e7964dd7bc6f74020439f5711cd2931ab8befa13e1a"} Jan 
30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.643460 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.674347 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" podStartSLOduration=9.724193386 podStartE2EDuration="37.674325336s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:57.044333829 +0000 UTC m=+962.016645380" lastFinishedPulling="2026-01-30 08:46:24.994465779 +0000 UTC m=+989.966777330" observedRunningTime="2026-01-30 08:46:30.669602598 +0000 UTC m=+995.641914149" watchObservedRunningTime="2026-01-30 08:46:30.674325336 +0000 UTC m=+995.646636887" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.700594 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" podStartSLOduration=5.88351199 podStartE2EDuration="37.700574469s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:57.548210359 +0000 UTC m=+962.520521910" lastFinishedPulling="2026-01-30 08:46:29.365272838 +0000 UTC m=+994.337584389" observedRunningTime="2026-01-30 08:46:30.696281784 +0000 UTC m=+995.668593355" watchObservedRunningTime="2026-01-30 08:46:30.700574469 +0000 UTC m=+995.672886020" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.731813 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" podStartSLOduration=10.295652887 podStartE2EDuration="37.731792658s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.651157849 +0000 UTC m=+963.623469400" lastFinishedPulling="2026-01-30 08:46:26.08729762 +0000 UTC 
m=+991.059609171" observedRunningTime="2026-01-30 08:46:30.730525197 +0000 UTC m=+995.702836748" watchObservedRunningTime="2026-01-30 08:46:30.731792658 +0000 UTC m=+995.704104209" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.834411 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" podStartSLOduration=8.208439844 podStartE2EDuration="36.834396843s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.786730971 +0000 UTC m=+963.759042522" lastFinishedPulling="2026-01-30 08:46:27.41268797 +0000 UTC m=+992.384999521" observedRunningTime="2026-01-30 08:46:30.763131589 +0000 UTC m=+995.735443150" watchObservedRunningTime="2026-01-30 08:46:30.834396843 +0000 UTC m=+995.806708384" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.836776 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" podStartSLOduration=8.422750347000001 podStartE2EDuration="36.836763057s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:57.671788793 +0000 UTC m=+962.644100344" lastFinishedPulling="2026-01-30 08:46:26.085801503 +0000 UTC m=+991.058113054" observedRunningTime="2026-01-30 08:46:30.832563896 +0000 UTC m=+995.804875457" watchObservedRunningTime="2026-01-30 08:46:30.836763057 +0000 UTC m=+995.809074608" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.881958 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" podStartSLOduration=7.643156377 podStartE2EDuration="36.881944273s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.787071282 +0000 UTC m=+963.759382833" lastFinishedPulling="2026-01-30 08:46:28.025859178 +0000 UTC m=+992.998170729" 
observedRunningTime="2026-01-30 08:46:30.880647492 +0000 UTC m=+995.852959033" watchObservedRunningTime="2026-01-30 08:46:30.881944273 +0000 UTC m=+995.854255824" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.948897 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" podStartSLOduration=7.232675782 podStartE2EDuration="37.948876131s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.858147042 +0000 UTC m=+963.830458593" lastFinishedPulling="2026-01-30 08:46:29.574347391 +0000 UTC m=+994.546658942" observedRunningTime="2026-01-30 08:46:30.926412487 +0000 UTC m=+995.898724038" watchObservedRunningTime="2026-01-30 08:46:30.948876131 +0000 UTC m=+995.921187692" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.956585 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" podStartSLOduration=7.925284555 podStartE2EDuration="36.956542681s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.993866429 +0000 UTC m=+963.966177980" lastFinishedPulling="2026-01-30 08:46:28.025124555 +0000 UTC m=+992.997436106" observedRunningTime="2026-01-30 08:46:30.947932451 +0000 UTC m=+995.920244002" watchObservedRunningTime="2026-01-30 08:46:30.956542681 +0000 UTC m=+995.928854232" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.100478 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" podStartSLOduration=9.723822475 podStartE2EDuration="37.100455752s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.710471487 +0000 UTC m=+963.682783038" lastFinishedPulling="2026-01-30 08:46:26.087104764 +0000 UTC m=+991.059416315" 
observedRunningTime="2026-01-30 08:46:30.995067588 +0000 UTC m=+995.967379139" watchObservedRunningTime="2026-01-30 08:46:31.100455752 +0000 UTC m=+996.072767303" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.201930 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podStartSLOduration=6.986481922 podStartE2EDuration="37.201909842s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.883070447 +0000 UTC m=+963.855381998" lastFinishedPulling="2026-01-30 08:46:29.098498367 +0000 UTC m=+994.070809918" observedRunningTime="2026-01-30 08:46:31.184541777 +0000 UTC m=+996.156853348" watchObservedRunningTime="2026-01-30 08:46:31.201909842 +0000 UTC m=+996.174221403" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.206486 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podStartSLOduration=7.120478975 podStartE2EDuration="37.206460694s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:59.012564009 +0000 UTC m=+963.984875560" lastFinishedPulling="2026-01-30 08:46:29.098545728 +0000 UTC m=+994.070857279" observedRunningTime="2026-01-30 08:46:31.109827595 +0000 UTC m=+996.082139156" watchObservedRunningTime="2026-01-30 08:46:31.206460694 +0000 UTC m=+996.178772245" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.331811 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" podStartSLOduration=36.331794132 podStartE2EDuration="36.331794132s" podCreationTimestamp="2026-01-30 08:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:46:31.330495722 +0000 UTC 
m=+996.302807273" watchObservedRunningTime="2026-01-30 08:46:31.331794132 +0000 UTC m=+996.304105683" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.664627 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" event={"ID":"1c4d1258-0416-49d0-a3a5-6ece70dc0c46","Type":"ContainerStarted","Data":"20d409eb98ad29768d87a70ccbfb5df39d35cc82bca621033b8438a2c0078488"} Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.664848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.699381 4758 generic.go:334] "Generic (PLEG): container finished" podID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38" exitCode=0 Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.699467 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38"} Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.732464 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" podStartSLOduration=5.302909288 podStartE2EDuration="38.73244604s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:56.829837838 +0000 UTC m=+961.802149389" lastFinishedPulling="2026-01-30 08:46:30.25937459 +0000 UTC m=+995.231686141" observedRunningTime="2026-01-30 08:46:31.731407387 +0000 UTC m=+996.703718938" watchObservedRunningTime="2026-01-30 08:46:31.73244604 +0000 UTC m=+996.704757591" Jan 30 08:46:32 crc kubenswrapper[4758]: I0130 08:46:32.708690 4758 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerStarted","Data":"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"} Jan 30 08:46:33 crc kubenswrapper[4758]: I0130 08:46:33.716540 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638" exitCode=0 Jan 30 08:46:33 crc kubenswrapper[4758]: I0130 08:46:33.717408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"} Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.174115 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.260568 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.328966 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.486275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.527750 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.636018 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.680476 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:46:35 crc kubenswrapper[4758]: I0130 08:46:35.209892 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:46:35 crc kubenswrapper[4758]: I0130 08:46:35.382787 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:46:35 crc kubenswrapper[4758]: I0130 08:46:35.488282 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:46:36 crc kubenswrapper[4758]: I0130 08:46:36.094751 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:46:38 crc kubenswrapper[4758]: E0130 08:46:38.470472 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:42 crc kubenswrapper[4758]: I0130 08:46:42.011549 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:43 crc kubenswrapper[4758]: I0130 08:46:43.797594 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerStarted","Data":"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.196986 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.662661 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.805959 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" event={"ID":"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf","Type":"ContainerStarted","Data":"300df300d3d68287507777e7475b9d04ada4f9d37a216a5c794236391b9a198d"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.807339 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" event={"ID":"fe039ec9-aaec-4e17-8eac-c7719245ba4d","Type":"ContainerStarted","Data":"be921b74bca7a395d47acbca6f0d1a333e21674335c6b5556b88a26021f4f862"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.808644 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" event={"ID":"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7","Type":"ContainerStarted","Data":"0ae710e7e2e9623131973cc1ef57ade1ea4d1e3d16d7d20ca9c2b94318304b7a"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.809799 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" 
event={"ID":"b6d614d3-1ced-4b27-bd91-8edd410e5fc5","Type":"ContainerStarted","Data":"d6f55fb4a8361cb03deac0797568f331485c09144eeb3009e75206e95cd798d6"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.811007 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" event={"ID":"8cb0c6cc-e254-4dae-b433-397504fba6dc","Type":"ContainerStarted","Data":"8190e99eed5f03b70a61667ef7966e70f00d9f4efd86bd2d8001f6853d718716"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.812929 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" event={"ID":"fbb26261-18aa-4ba0-940e-788200175600","Type":"ContainerStarted","Data":"d6625a3b44a01f6adfe8410a8890e5a95dd31fc3f2ecd051eb91974e89c92f73"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.817435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerStarted","Data":"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.818830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" event={"ID":"0b006f91-5b27-4342-935b-c7a7f174c03b","Type":"ContainerStarted","Data":"2f4909e872d800e52e0b5bb2613b7dd70ad6a3c4fac96649091b99aade2ceefd"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.819046 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.820276 4758 generic.go:334] "Generic (PLEG): container finished" podID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb" exitCode=0 Jan 30 
08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.820340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.821887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" event={"ID":"ef31968c-db2e-4083-a08f-19a8daf0ac2d","Type":"ContainerStarted","Data":"58625f156738e4961025087556129785eae41d8c2585e63c9c53d762b719e359"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.898344 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" podStartSLOduration=4.604977741 podStartE2EDuration="48.898325444s" podCreationTimestamp="2026-01-30 08:45:56 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.858457052 +0000 UTC m=+963.830768603" lastFinishedPulling="2026-01-30 08:46:43.151804755 +0000 UTC m=+1008.124116306" observedRunningTime="2026-01-30 08:46:44.848659548 +0000 UTC m=+1009.820971099" watchObservedRunningTime="2026-01-30 08:46:44.898325444 +0000 UTC m=+1009.870636995" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.964933 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" podStartSLOduration=6.205774497 podStartE2EDuration="50.96490931s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.795820377 +0000 UTC m=+963.768131938" lastFinishedPulling="2026-01-30 08:46:43.5549552 +0000 UTC m=+1008.527266751" observedRunningTime="2026-01-30 08:46:44.956754865 +0000 UTC m=+1009.929066416" watchObservedRunningTime="2026-01-30 08:46:44.96490931 +0000 UTC m=+1009.937221011" Jan 30 08:46:45 crc 
kubenswrapper[4758]: I0130 08:46:45.828987 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.829030 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.829062 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.848527 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" podStartSLOduration=7.060254964 podStartE2EDuration="51.848509195s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.769457866 +0000 UTC m=+963.741769417" lastFinishedPulling="2026-01-30 08:46:43.557712097 +0000 UTC m=+1008.530023648" observedRunningTime="2026-01-30 08:46:45.843012143 +0000 UTC m=+1010.815323694" watchObservedRunningTime="2026-01-30 08:46:45.848509195 +0000 UTC m=+1010.820820746" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.859999 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" podStartSLOduration=7.053920662 podStartE2EDuration="51.859979254s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.749815907 +0000 UTC m=+963.722127458" lastFinishedPulling="2026-01-30 08:46:43.555874499 +0000 UTC m=+1008.528186050" observedRunningTime="2026-01-30 08:46:45.857419044 +0000 UTC m=+1010.829730595" watchObservedRunningTime="2026-01-30 08:46:45.859979254 +0000 UTC m=+1010.832290805" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.877756 
4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" podStartSLOduration=7.254327334 podStartE2EDuration="51.87773875s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.529813194 +0000 UTC m=+963.502124745" lastFinishedPulling="2026-01-30 08:46:43.15322461 +0000 UTC m=+1008.125536161" observedRunningTime="2026-01-30 08:46:45.871075652 +0000 UTC m=+1010.843387233" watchObservedRunningTime="2026-01-30 08:46:45.87773875 +0000 UTC m=+1010.850050301" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.891347 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5zts" podStartSLOduration=11.278445114 podStartE2EDuration="23.891325897s" podCreationTimestamp="2026-01-30 08:46:22 +0000 UTC" firstStartedPulling="2026-01-30 08:46:30.42782208 +0000 UTC m=+995.400133631" lastFinishedPulling="2026-01-30 08:46:43.040702863 +0000 UTC m=+1008.013014414" observedRunningTime="2026-01-30 08:46:45.88602343 +0000 UTC m=+1010.858334981" watchObservedRunningTime="2026-01-30 08:46:45.891325897 +0000 UTC m=+1010.863637448" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.903827 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" podStartSLOduration=6.946456128 podStartE2EDuration="51.903803938s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.595385501 +0000 UTC m=+963.567697052" lastFinishedPulling="2026-01-30 08:46:43.552733311 +0000 UTC m=+1008.525044862" observedRunningTime="2026-01-30 08:46:45.897749498 +0000 UTC m=+1010.870061049" watchObservedRunningTime="2026-01-30 08:46:45.903803938 +0000 UTC m=+1010.876115499" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.944714 4758 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" podStartSLOduration=38.678650647 podStartE2EDuration="51.94469723s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:46:29.885429401 +0000 UTC m=+994.857740952" lastFinishedPulling="2026-01-30 08:46:43.151475984 +0000 UTC m=+1008.123787535" observedRunningTime="2026-01-30 08:46:45.941135148 +0000 UTC m=+1010.913446709" watchObservedRunningTime="2026-01-30 08:46:45.94469723 +0000 UTC m=+1010.917008771" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.987474 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" podStartSLOduration=39.509481863 podStartE2EDuration="52.987450679s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:46:29.665311091 +0000 UTC m=+994.637622642" lastFinishedPulling="2026-01-30 08:46:43.143279917 +0000 UTC m=+1008.115591458" observedRunningTime="2026-01-30 08:46:45.974930227 +0000 UTC m=+1010.947241778" watchObservedRunningTime="2026-01-30 08:46:45.987450679 +0000 UTC m=+1010.959762230" Jan 30 08:46:49 crc kubenswrapper[4758]: I0130 08:46:49.851822 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerStarted","Data":"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f"} Jan 30 08:46:49 crc kubenswrapper[4758]: I0130 08:46:49.880760 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjzxr" podStartSLOduration=16.298943186 podStartE2EDuration="33.880742283s" podCreationTimestamp="2026-01-30 08:46:16 +0000 UTC" firstStartedPulling="2026-01-30 08:46:31.703187353 +0000 UTC m=+996.675498914" lastFinishedPulling="2026-01-30 08:46:49.28498646 +0000 UTC 
m=+1014.257298011" observedRunningTime="2026-01-30 08:46:49.873485545 +0000 UTC m=+1014.845797136" watchObservedRunningTime="2026-01-30 08:46:49.880742283 +0000 UTC m=+1014.853053834" Jan 30 08:46:50 crc kubenswrapper[4758]: I0130 08:46:50.357196 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:50 crc kubenswrapper[4758]: I0130 08:46:50.970820 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:50 crc kubenswrapper[4758]: I0130 08:46:50.978924 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:51 crc kubenswrapper[4758]: I0130 08:46:51.770598 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.871181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" event={"ID":"a104527b-98dc-4120-91b5-6e7e9466b9a3","Type":"ContainerStarted","Data":"36854ea4648a5d8015b7277aad0654738565315ee7dd99ca4357cc4ef760750b"} Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.871748 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.892002 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podStartSLOduration=5.505465937 podStartE2EDuration="58.891982401s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:59.068577504 +0000 UTC m=+964.040889055" 
lastFinishedPulling="2026-01-30 08:46:52.455093968 +0000 UTC m=+1017.427405519" observedRunningTime="2026-01-30 08:46:52.884979761 +0000 UTC m=+1017.857291322" watchObservedRunningTime="2026-01-30 08:46:52.891982401 +0000 UTC m=+1017.864293952" Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.913777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.913827 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:53 crc kubenswrapper[4758]: I0130 08:46:53.956183 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-q5zts" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" probeResult="failure" output=< Jan 30 08:46:53 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:46:53 crc kubenswrapper[4758]: > Jan 30 08:46:54 crc kubenswrapper[4758]: I0130 08:46:54.611802 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:46:54 crc kubenswrapper[4758]: I0130 08:46:54.709435 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:46:54 crc kubenswrapper[4758]: I0130 08:46:54.811259 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.284714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.287406 4758 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.336104 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.339142 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.225081 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.226174 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.267738 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.967781 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:58 crc kubenswrapper[4758]: I0130 08:46:58.016004 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:59 crc kubenswrapper[4758]: I0130 08:46:59.914149 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xjzxr" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" containerID="cri-o://3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" gracePeriod=2 Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.279520 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.441995 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.442357 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.442486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.443000 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities" (OuterVolumeSpecName: "utilities") pod "bcb47b30-3297-48ef-b045-92a3bfa3ade9" (UID: "bcb47b30-3297-48ef-b045-92a3bfa3ade9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.449493 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4" (OuterVolumeSpecName: "kube-api-access-bk9f4") pod "bcb47b30-3297-48ef-b045-92a3bfa3ade9" (UID: "bcb47b30-3297-48ef-b045-92a3bfa3ade9"). InnerVolumeSpecName "kube-api-access-bk9f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.491942 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb47b30-3297-48ef-b045-92a3bfa3ade9" (UID: "bcb47b30-3297-48ef-b045-92a3bfa3ade9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.544282 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.544323 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.544334 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.922340 4758 generic.go:334] "Generic (PLEG): container finished" podID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" exitCode=0 Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.922388 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.922409 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f"} Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.923718 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"06b17b09dd9e50fb149bd1b02970b5fb7919445a9f9ff520c7633494ad9262cf"} Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.923744 4758 scope.go:117] "RemoveContainer" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.943040 4758 scope.go:117] "RemoveContainer" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.957344 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.969277 4758 scope.go:117] "RemoveContainer" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.977293 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.988281 4758 scope.go:117] "RemoveContainer" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" Jan 30 08:47:00 crc kubenswrapper[4758]: E0130 08:47:00.988993 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f\": container with ID starting with 3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f not found: ID does not exist" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989069 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f"} err="failed to get container status \"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f\": rpc error: code = NotFound desc = could not find container \"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f\": container with ID starting with 3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f not found: ID does not exist" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989100 4758 scope.go:117] "RemoveContainer" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb" Jan 30 08:47:00 crc kubenswrapper[4758]: E0130 08:47:00.989632 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb\": container with ID starting with ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb not found: ID does not exist" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989670 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"} err="failed to get container status \"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb\": rpc error: code = NotFound desc = could not find container \"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb\": container with ID 
starting with ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb not found: ID does not exist" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989698 4758 scope.go:117] "RemoveContainer" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38" Jan 30 08:47:00 crc kubenswrapper[4758]: E0130 08:47:00.989979 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38\": container with ID starting with 8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38 not found: ID does not exist" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.990008 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38"} err="failed to get container status \"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38\": rpc error: code = NotFound desc = could not find container \"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38\": container with ID starting with 8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38 not found: ID does not exist" Jan 30 08:47:01 crc kubenswrapper[4758]: I0130 08:47:01.778259 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" path="/var/lib/kubelet/pods/bcb47b30-3297-48ef-b045-92a3bfa3ade9/volumes" Jan 30 08:47:02 crc kubenswrapper[4758]: I0130 08:47:02.962123 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:47:03 crc kubenswrapper[4758]: I0130 08:47:02.999950 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:47:03 crc 
kubenswrapper[4758]: I0130 08:47:03.904579 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:47:04 crc kubenswrapper[4758]: I0130 08:47:04.948777 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q5zts" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" containerID="cri-o://b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" gracePeriod=2 Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.386153 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.510485 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.510534 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.510662 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.511519 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities" (OuterVolumeSpecName: "utilities") pod "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" (UID: "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.516454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2" (OuterVolumeSpecName: "kube-api-access-ggxq2") pod "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" (UID: "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1"). InnerVolumeSpecName "kube-api-access-ggxq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.539661 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" (UID: "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.612476 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.612542 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.612553 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.697156 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.959907 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" exitCode=0 Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"} Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3"} Jan 30 08:47:05 crc 
kubenswrapper[4758]: I0130 08:47:05.960638 4758 scope.go:117] "RemoveContainer" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960182 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.976980 4758 scope.go:117] "RemoveContainer" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.984637 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.991665 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.010282 4758 scope.go:117] "RemoveContainer" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.027305 4758 scope.go:117] "RemoveContainer" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" Jan 30 08:47:06 crc kubenswrapper[4758]: E0130 08:47:06.027735 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c\": container with ID starting with b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c not found: ID does not exist" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.027831 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"} err="failed to get container status 
\"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c\": rpc error: code = NotFound desc = could not find container \"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c\": container with ID starting with b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c not found: ID does not exist" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.027942 4758 scope.go:117] "RemoveContainer" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638" Jan 30 08:47:06 crc kubenswrapper[4758]: E0130 08:47:06.028227 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638\": container with ID starting with a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638 not found: ID does not exist" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.028320 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"} err="failed to get container status \"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638\": rpc error: code = NotFound desc = could not find container \"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638\": container with ID starting with a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638 not found: ID does not exist" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.028402 4758 scope.go:117] "RemoveContainer" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1" Jan 30 08:47:06 crc kubenswrapper[4758]: E0130 08:47:06.028661 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": container with ID starting with 685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1 not found: ID does not exist" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.028754 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"} err="failed to get container status \"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": rpc error: code = NotFound desc = could not find container \"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": container with ID starting with 685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1 not found: ID does not exist" Jan 30 08:47:07 crc kubenswrapper[4758]: I0130 08:47:07.776321 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" path="/var/lib/kubelet/pods/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1/volumes" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.250641 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257175 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257216 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257232 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-utilities" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257240 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-utilities" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257250 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257257 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257279 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257287 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257299 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257306 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257322 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-utilities" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257330 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-utilities" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257572 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257589 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.258537 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262649 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262685 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262696 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262878 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6tkgt" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.278524 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.334175 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.335246 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.339708 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.348515 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.348559 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.348969 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.349030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.349120 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.356412 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450251 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450352 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450428 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450463 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.451418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.451432 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.451491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.469917 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.483898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtj9x\" (UniqueName: 
\"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.574465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.649769 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:24 crc kubenswrapper[4758]: I0130 08:47:24.032326 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:24 crc kubenswrapper[4758]: I0130 08:47:24.075151 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" event={"ID":"76b978eb-b181-4644-890e-e990cd2ba269","Type":"ContainerStarted","Data":"665d5ba8bc1f090d8cef06dca81382042d1531fae5dcc1915e1bee30b5b90286"} Jan 30 08:47:24 crc kubenswrapper[4758]: I0130 08:47:24.119797 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:24 crc kubenswrapper[4758]: W0130 08:47:24.125729 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod282c7a61_ccff_46d9_bd1b_e3d40f7e9992.slice/crio-bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e WatchSource:0}: Error finding container bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e: Status 404 returned error can't find the container with id bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e Jan 30 08:47:25 crc kubenswrapper[4758]: I0130 08:47:25.085844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" 
event={"ID":"282c7a61-ccff-46d9-bd1b-e3d40f7e9992","Type":"ContainerStarted","Data":"bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e"} Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.179219 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.230158 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.231291 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.244882 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.303937 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.304137 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.304271 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " 
pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.406126 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.406191 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.406326 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.407095 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.407669 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.432493 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.567362 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.602875 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.652226 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.653841 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.662632 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.811626 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.811692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" 
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.811774 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.916019 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.916592 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.916628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.917613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.922334 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.958704 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.070981 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.086336 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.439279 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.440706 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447142 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447358 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447638 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447894 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.448163 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8zbw4" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.448367 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.448529 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.455579 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631554 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631641 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631661 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631711 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631748 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631773 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631792 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631813 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.656136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.732930 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.732989 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733064 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733118 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733240 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733310 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733363 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.734699 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.736323 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.736504 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.736591 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.737206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.738557 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.765221 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.766284 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.767249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.791064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.877567 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.883743 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.900154 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.901442 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.911471 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.911740 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.921417 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.921748 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nwnvg" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.921864 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.922699 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.929529 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.946265 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:47:28 crc 
kubenswrapper[4758]: I0130 08:47:28.093760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093825 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093905 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc 
kubenswrapper[4758]: I0130 08:47:28.093961 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094009 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094063 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: 
I0130 08:47:28.094104 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.101347 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.149419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" event={"ID":"e2df4666-d5a3-4445-81a2-6cf85fd83b33","Type":"ContainerStarted","Data":"b96b4fc83550ef190db47ae38eb6bf89d769a168b8ca5a2463f2d0e77a15c934"} Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.154942 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" event={"ID":"06ace92c-6051-496d-add5-845fcd6b184f","Type":"ContainerStarted","Data":"c4dbfaf7b50d7784d2c38e9516b227a07cd7a4d0e1609065c924221cceeafd2c"} Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195788 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195896 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195979 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196000 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196202 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196243 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196593 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197012 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197093 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197124 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xslvz\" (UniqueName: 
\"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197151 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.204502 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.206437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.206576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.207838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.208590 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.209412 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.217225 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.223470 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.226820 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.254556 4758 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.823091 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.077891 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.146813 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.147881 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.156960 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6rqbl" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.157084 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.157189 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.161138 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.167747 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.182144 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.252767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerStarted","Data":"8f4809dc1b13b9e08c7692a160b88862bacf3c8f0bf775e5a671d7ecfeb7b0f7"} Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.257767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerStarted","Data":"2885eef36f4c15f0a02d92953e9ac8c27214898712fb29c24b72fc2ee76d019d"} Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333493 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333548 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333610 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " 
pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333639 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333657 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333689 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333736 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ll5\" (UniqueName: \"kubernetes.io/projected/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kube-api-access-86ll5\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86ll5\" (UniqueName: \"kubernetes.io/projected/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kube-api-access-86ll5\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " 
pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434658 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434682 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434699 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434931 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.435107 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.435690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.436494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.440179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.465790 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.469263 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.473616 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86ll5\" (UniqueName: \"kubernetes.io/projected/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kube-api-access-86ll5\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.493233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " 
pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.783106 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.286449 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.287805 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291370 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-p4fh8" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291576 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291878 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.300503 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.394940 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397178 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bwnhs\" (UniqueName: \"kubernetes.io/projected/0a15517a-ff48-40d1-91b4-442bfef91fc1-kube-api-access-bwnhs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397239 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397326 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397393 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.499263 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.500272 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506150 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnhs\" (UniqueName: \"kubernetes.io/projected/0a15517a-ff48-40d1-91b4-442bfef91fc1-kube-api-access-bwnhs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506261 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506310 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506368 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " 
pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506853 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.507871 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.508437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.509140 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.512764 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.513227 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.513342 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.515018 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.530563 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kdb57" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.535129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.579551 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.580340 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnhs\" (UniqueName: \"kubernetes.io/projected/0a15517a-ff48-40d1-91b4-442bfef91fc1-kube-api-access-bwnhs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.598900 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.617853 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.617923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.617992 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vh24\" (UniqueName: \"kubernetes.io/projected/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kube-api-access-8vh24\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.618012 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kolla-config\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.618063 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-config-data\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " 
pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.638674 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.718956 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vh24\" (UniqueName: \"kubernetes.io/projected/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kube-api-access-8vh24\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719079 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kolla-config\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-config-data\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" 
Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.720016 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-config-data\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.721270 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kolla-config\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.747098 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.748073 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.790708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vh24\" (UniqueName: \"kubernetes.io/projected/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kube-api-access-8vh24\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.836393 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 08:47:31 crc kubenswrapper[4758]: I0130 08:47:31.517740 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:47:31 crc kubenswrapper[4758]: I0130 08:47:31.723666 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:47:31 crc kubenswrapper[4758]: I0130 08:47:31.896577 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.405369 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.415023 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.418963 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.451125 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4pd9v" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.478207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"kube-state-metrics-0\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " pod="openstack/kube-state-metrics-0" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.577857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerStarted","Data":"cf475cc5d360c9c29e47bdbfb01dfa1f891f176fcaa0100edc0b6c1bb6283d95"} Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.579528 4758 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/memcached-0" event={"ID":"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d","Type":"ContainerStarted","Data":"08841b6a57727c7be4ea9e79802b75e9976c6628cfcd6f40426cb3244f93caf9"}
Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.581278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerStarted","Data":"ef4d61440694ddae18cf4ca34416964685192c06cbabddeff999f222b3a367fd"}
Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.585149 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"kube-state-metrics-0\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.613196 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"kube-state-metrics-0\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.789655 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 08:47:33 crc kubenswrapper[4758]: I0130 08:47:33.746499 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 08:47:33 crc kubenswrapper[4758]: W0130 08:47:33.758367 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbb296cf_3469_43c3_9ebe_8fd1d31c00a6.slice/crio-d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7 WatchSource:0}: Error finding container d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7: Status 404 returned error can't find the container with id d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7
Jan 30 08:47:34 crc kubenswrapper[4758]: I0130 08:47:34.621030 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerStarted","Data":"d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7"}
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.102458 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bg2b8"]
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.103893 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.116134 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.116173 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.116466 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pcbvq"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.133337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8"]
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191199 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78294966-2fbd-4ed5-8d2a-2096ac07dac1-scripts\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-combined-ca-bundle\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2p4\" (UniqueName: \"kubernetes.io/projected/78294966-2fbd-4ed5-8d2a-2096ac07dac1-kube-api-access-kd2p4\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191325 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-ovn-controller-tls-certs\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-log-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.212106 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9jzfn"]
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.217148 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.237004 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9jzfn"]
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294266 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-log\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkm7\" (UniqueName: \"kubernetes.io/projected/86f64629-c944-4783-8012-7cea45690009-kube-api-access-wzkm7\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294356 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78294966-2fbd-4ed5-8d2a-2096ac07dac1-scripts\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-etc-ovs\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-combined-ca-bundle\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294431 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd2p4\" (UniqueName: \"kubernetes.io/projected/78294966-2fbd-4ed5-8d2a-2096ac07dac1-kube-api-access-kd2p4\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294449 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294464 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294487 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-ovn-controller-tls-certs\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294555 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86f64629-c944-4783-8012-7cea45690009-scripts\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-log-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-lib\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-run\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.297253 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78294966-2fbd-4ed5-8d2a-2096ac07dac1-scripts\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.300238 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.300478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-log-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.300781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.317451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-ovn-controller-tls-certs\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.319953 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd2p4\" (UniqueName: \"kubernetes.io/projected/78294966-2fbd-4ed5-8d2a-2096ac07dac1-kube-api-access-kd2p4\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.322604 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-combined-ca-bundle\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396000 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86f64629-c944-4783-8012-7cea45690009-scripts\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396121 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-lib\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396160 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-run\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396202 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-log\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396250 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkm7\" (UniqueName: \"kubernetes.io/projected/86f64629-c944-4783-8012-7cea45690009-kube-api-access-wzkm7\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396285 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-etc-ovs\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-etc-ovs\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.397730 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-lib\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.397816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-log\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.398618 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-run\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.398983 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86f64629-c944-4783-8012-7cea45690009-scripts\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.419099 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkm7\" (UniqueName: \"kubernetes.io/projected/86f64629-c944-4783-8012-7cea45690009-kube-api-access-wzkm7\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.446832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.539609 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9jzfn"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.925426 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.926838 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.928861 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.929498 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.930361 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.932424 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.937192 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-788gr"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.942199 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020197 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020245 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020290 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020347 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020450 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hpk\" (UniqueName: \"kubernetes.io/projected/a7ba2509-bffc-4639-9b6f-188e2a194b7a-kube-api-access-h2hpk\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121683 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121741 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121772 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121800 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121822 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124119 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124408 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124765 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.125375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.125642 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.126659 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hpk\" (UniqueName: \"kubernetes.io/projected/a7ba2509-bffc-4639-9b6f-188e2a194b7a-kube-api-access-h2hpk\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.135227 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.135907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.143324 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.165762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hpk\" (UniqueName: \"kubernetes.io/projected/a7ba2509-bffc-4639-9b6f-188e2a194b7a-kube-api-access-h2hpk\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.206861 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.267707 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.080022 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.085926 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.089109 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.089906 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-575xs"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.089990 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.093720 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.096722 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205856 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5cns\" (UniqueName: \"kubernetes.io/projected/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-kube-api-access-c5cns\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205997 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206076 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206127 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206145 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-config\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5cns\" (UniqueName: \"kubernetes.io/projected/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-kube-api-access-c5cns\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307746 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.308761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.308826 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-config\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.308983 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.309137 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.312693 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.314287 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-config\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.315931 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.316855 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.317702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.323916 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5cns\" (UniqueName: \"kubernetes.io/projected/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-kube-api-access-c5cns\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.333098 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName:
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.423367 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 08:47:48 crc kubenswrapper[4758]: E0130 08:47:48.969628 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 30 08:47:48 crc kubenswrapper[4758]: E0130 08:47:48.970359 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n677hddh589h9dh5ddh5d6h56ch65chf5h5cdh666h54ch79h66fhb4h68h677h677h5fh647h5h76h67ch56chch554hcbh64bh64fh597h685h69q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-ce
rts,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vh24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
30 08:47:48 crc kubenswrapper[4758]: E0130 08:47:48.971537 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d" Jan 30 08:47:49 crc kubenswrapper[4758]: E0130 08:47:49.780699 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d" Jan 30 08:47:52 crc kubenswrapper[4758]: I0130 08:47:52.390476 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:47:52 crc kubenswrapper[4758]: I0130 08:47:52.390773 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.866402 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.866858 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p297,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext
{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(89ff2fc5-609f-4ca7-b997-9f8adfa5a221): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.868909 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.873778 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.874604 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xslvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.875751 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" Jan 30 08:47:54 crc kubenswrapper[4758]: E0130 08:47:54.819373 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" Jan 30 08:47:54 crc kubenswrapper[4758]: E0130 08:47:54.819493 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.521697 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.522284 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(1787d8b1-5b19-41e5-a66d-8375f9d5bb3f): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.523873 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="1787d8b1-5b19-41e5-a66d-8375f9d5bb3f" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.531489 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.531615 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/
var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwnhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(0a15517a-ff48-40d1-91b4-442bfef91fc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.533674 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="0a15517a-ff48-40d1-91b4-442bfef91fc1" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.825705 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="1787d8b1-5b19-41e5-a66d-8375f9d5bb3f" Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.826151 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="0a15517a-ff48-40d1-91b4-442bfef91fc1" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.420776 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.420953 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtj9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8vbmk_openstack(76b978eb-b181-4644-890e-e990cd2ba269): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.422555 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" podUID="76b978eb-b181-4644-890e-e990cd2ba269" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.444437 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.444712 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g8xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xxlqj_openstack(282c7a61-ccff-46d9-bd1b-e3d40f7e9992): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.446656 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" podUID="282c7a61-ccff-46d9-bd1b-e3d40f7e9992" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.550104 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.550679 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfwpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8zllc_openstack(06ace92c-6051-496d-add5-845fcd6b184f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.552151 4758 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" podUID="06ace92c-6051-496d-add5-845fcd6b184f" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.558785 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.558970 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsz2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-lfzl4_openstack(e2df4666-d5a3-4445-81a2-6cf85fd83b33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.561305 4758 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" podUID="e2df4666-d5a3-4445-81a2-6cf85fd83b33" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.834307 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" podUID="e2df4666-d5a3-4445-81a2-6cf85fd83b33" Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.834291 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" podUID="06ace92c-6051-496d-add5-845fcd6b184f" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.046285 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8"] Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.405029 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9jzfn"] Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.438104 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.445077 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:57 crc kubenswrapper[4758]: W0130 08:47:57.467624 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f64629_c944_4783_8012_7cea45690009.slice/crio-ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851 WatchSource:0}: Error finding container ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851: Status 404 returned error can't find the container with id ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851 Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504128 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504201 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"76b978eb-b181-4644-890e-e990cd2ba269\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504276 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\" (UID: 
\"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504332 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"76b978eb-b181-4644-890e-e990cd2ba269\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.505473 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config" (OuterVolumeSpecName: "config") pod "76b978eb-b181-4644-890e-e990cd2ba269" (UID: "76b978eb-b181-4644-890e-e990cd2ba269"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.506088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config" (OuterVolumeSpecName: "config") pod "282c7a61-ccff-46d9-bd1b-e3d40f7e9992" (UID: "282c7a61-ccff-46d9-bd1b-e3d40f7e9992"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.506356 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "282c7a61-ccff-46d9-bd1b-e3d40f7e9992" (UID: "282c7a61-ccff-46d9-bd1b-e3d40f7e9992"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.519688 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x" (OuterVolumeSpecName: "kube-api-access-mtj9x") pod "76b978eb-b181-4644-890e-e990cd2ba269" (UID: "76b978eb-b181-4644-890e-e990cd2ba269"). InnerVolumeSpecName "kube-api-access-mtj9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.524422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh" (OuterVolumeSpecName: "kube-api-access-8g8xh") pod "282c7a61-ccff-46d9-bd1b-e3d40f7e9992" (UID: "282c7a61-ccff-46d9-bd1b-e3d40f7e9992"). InnerVolumeSpecName "kube-api-access-8g8xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606620 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606667 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606684 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606696 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606708 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.837058 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8" event={"ID":"78294966-2fbd-4ed5-8d2a-2096ac07dac1","Type":"ContainerStarted","Data":"9f71368381ed4b23545b49a874f239adb7294957710f4d88131fb382aab84ece"} Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.838467 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851"} Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.839379 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" event={"ID":"76b978eb-b181-4644-890e-e990cd2ba269","Type":"ContainerDied","Data":"665d5ba8bc1f090d8cef06dca81382042d1531fae5dcc1915e1bee30b5b90286"} Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.839441 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.843553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" event={"ID":"282c7a61-ccff-46d9-bd1b-e3d40f7e9992","Type":"ContainerDied","Data":"bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e"} Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.843577 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.889953 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.902289 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.932138 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.937363 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.247846 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.439463 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 08:47:58 crc kubenswrapper[4758]: W0130 08:47:58.448169 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7ba2509_bffc_4639_9b6f_188e2a194b7a.slice/crio-4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388 WatchSource:0}: Error finding container 4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388: Status 404 returned error can't find the container with id 4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388 Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.855527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0817ec2c-1c6d-4c1b-a019-3f2579ade18a","Type":"ContainerStarted","Data":"206886b49e89606fe3abf172131b8e9c8cba5f0c3ba7a514b2afbfb3ab939a26"} Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.857818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerStarted","Data":"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9"} Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.857913 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.859115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7ba2509-bffc-4639-9b6f-188e2a194b7a","Type":"ContainerStarted","Data":"4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388"} Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.888163 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.250555009 podStartE2EDuration="26.888142346s" podCreationTimestamp="2026-01-30 08:47:32 +0000 UTC" firstStartedPulling="2026-01-30 08:47:33.768335443 +0000 UTC m=+1058.740646994" lastFinishedPulling="2026-01-30 08:47:58.40592278 +0000 UTC m=+1083.378234331" observedRunningTime="2026-01-30 08:47:58.874187356 +0000 UTC m=+1083.846498917" watchObservedRunningTime="2026-01-30 08:47:58.888142346 +0000 UTC m=+1083.860453897" Jan 30 08:47:59 crc kubenswrapper[4758]: I0130 08:47:59.779237 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282c7a61-ccff-46d9-bd1b-e3d40f7e9992" path="/var/lib/kubelet/pods/282c7a61-ccff-46d9-bd1b-e3d40f7e9992/volumes" Jan 30 08:47:59 crc kubenswrapper[4758]: I0130 08:47:59.779669 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b978eb-b181-4644-890e-e990cd2ba269" path="/var/lib/kubelet/pods/76b978eb-b181-4644-890e-e990cd2ba269/volumes" Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.882926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8" 
event={"ID":"78294966-2fbd-4ed5-8d2a-2096ac07dac1","Type":"ContainerStarted","Data":"e5b1ce3ad16e899bd9ca345d751692237d36dd98405551005503ec69d2c0c7cb"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.883643 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bg2b8" Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.885243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"d08ff3faa6719a19927863b031f6ed229b91aca2a658aa331153a7b4ed10cb35"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.888766 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7ba2509-bffc-4639-9b6f-188e2a194b7a","Type":"ContainerStarted","Data":"b02b095b561ac3970e9b8d3e786005a428c4cf9573cd13fb9c8eec04b187a1a6"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.890568 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0817ec2c-1c6d-4c1b-a019-3f2579ade18a","Type":"ContainerStarted","Data":"e2f5c88a801381ae07b3b7f5b0f543088d4149d91c5a2302cdbc42ba81fe9e9f"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.917435 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bg2b8" podStartSLOduration=21.860111956 podStartE2EDuration="25.917412378s" podCreationTimestamp="2026-01-30 08:47:36 +0000 UTC" firstStartedPulling="2026-01-30 08:47:57.356210725 +0000 UTC m=+1082.328522276" lastFinishedPulling="2026-01-30 08:48:01.413511147 +0000 UTC m=+1086.385822698" observedRunningTime="2026-01-30 08:48:01.912758441 +0000 UTC m=+1086.885070002" watchObservedRunningTime="2026-01-30 08:48:01.917412378 +0000 UTC m=+1086.889723929" Jan 30 08:48:02 crc kubenswrapper[4758]: I0130 08:48:02.899307 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="86f64629-c944-4783-8012-7cea45690009" containerID="d08ff3faa6719a19927863b031f6ed229b91aca2a658aa331153a7b4ed10cb35" exitCode=0 Jan 30 08:48:02 crc kubenswrapper[4758]: I0130 08:48:02.899466 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerDied","Data":"d08ff3faa6719a19927863b031f6ed229b91aca2a658aa331153a7b4ed10cb35"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.907914 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d","Type":"ContainerStarted","Data":"2a192c887f67c306619f4adb3e16026c0a668af211f43de5f60491116763b3ef"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.909589 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.913697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"70b33f1d69b885a2b151eefc51689f2f71208f0f536efba3c9620a5ebc490934"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.913774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"59136f4228296109ffff66c5201c6b06b3163ec703744d2272130da3661b1543"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.914002 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.916286 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7ba2509-bffc-4639-9b6f-188e2a194b7a","Type":"ContainerStarted","Data":"dd83aa514cfbb28fd096b1875ce9ca027292debf48424631561636ab97986280"} 
Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.919566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0817ec2c-1c6d-4c1b-a019-3f2579ade18a","Type":"ContainerStarted","Data":"a0ef2f157905418976164fe4e2da020208b7f06114c84436d5486800378f5aa5"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.936705 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.639829342 podStartE2EDuration="33.936687377s" podCreationTimestamp="2026-01-30 08:47:30 +0000 UTC" firstStartedPulling="2026-01-30 08:47:31.948157386 +0000 UTC m=+1056.920468937" lastFinishedPulling="2026-01-30 08:48:03.245015421 +0000 UTC m=+1088.217326972" observedRunningTime="2026-01-30 08:48:03.931446532 +0000 UTC m=+1088.903758103" watchObservedRunningTime="2026-01-30 08:48:03.936687377 +0000 UTC m=+1088.908998928" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.958297 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9jzfn" podStartSLOduration=24.055215684 podStartE2EDuration="27.958277368s" podCreationTimestamp="2026-01-30 08:47:36 +0000 UTC" firstStartedPulling="2026-01-30 08:47:57.474267351 +0000 UTC m=+1082.446578902" lastFinishedPulling="2026-01-30 08:48:01.377329035 +0000 UTC m=+1086.349640586" observedRunningTime="2026-01-30 08:48:03.954887601 +0000 UTC m=+1088.927199172" watchObservedRunningTime="2026-01-30 08:48:03.958277368 +0000 UTC m=+1088.930588929" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.980574 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.29041172 podStartE2EDuration="28.980555481s" podCreationTimestamp="2026-01-30 08:47:35 +0000 UTC" firstStartedPulling="2026-01-30 08:47:58.450045352 +0000 UTC m=+1083.422356903" lastFinishedPulling="2026-01-30 08:48:03.140189113 +0000 UTC m=+1088.112500664" 
observedRunningTime="2026-01-30 08:48:03.978401193 +0000 UTC m=+1088.950712754" watchObservedRunningTime="2026-01-30 08:48:03.980555481 +0000 UTC m=+1088.952867032" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.999349 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.26313155 podStartE2EDuration="24.999327594s" podCreationTimestamp="2026-01-30 08:47:39 +0000 UTC" firstStartedPulling="2026-01-30 08:47:58.396766471 +0000 UTC m=+1083.369078022" lastFinishedPulling="2026-01-30 08:48:03.132962515 +0000 UTC m=+1088.105274066" observedRunningTime="2026-01-30 08:48:03.994443669 +0000 UTC m=+1088.966755250" watchObservedRunningTime="2026-01-30 08:48:03.999327594 +0000 UTC m=+1088.971639145" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.269090 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.305599 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.424066 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.458702 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.926211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.926905 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.926929 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:06 crc 
kubenswrapper[4758]: I0130 08:48:06.974708 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.280713 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.347431 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.348904 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.355762 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.361940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.372791 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.465803 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-w9zqv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.472629 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.478253 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9zqv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.478357 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.500961 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.501082 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.501158 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.501174 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " 
pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl2lm\" (UniqueName: \"kubernetes.io/projected/1c89386d-e6bb-45b3-bd95-970270275127-kube-api-access-xl2lm\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606171 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c89386d-e6bb-45b3-bd95-970270275127-config\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606198 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovn-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: 
\"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606505 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-combined-ca-bundle\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606585 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606642 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovs-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " 
pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.607318 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.608054 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.608346 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.639143 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.695622 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.704238 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.710631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c89386d-e6bb-45b3-bd95-970270275127-config\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711381 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c89386d-e6bb-45b3-bd95-970270275127-config\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711515 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovn-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711800 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovn-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711917 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-combined-ca-bundle\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: 
I0130 08:48:07.712643 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovs-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl2lm\" (UniqueName: \"kubernetes.io/projected/1c89386d-e6bb-45b3-bd95-970270275127-kube-api-access-xl2lm\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712763 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovs-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.715496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-combined-ca-bundle\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.722716 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.735316 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl2lm\" (UniqueName: \"kubernetes.io/projected/1c89386d-e6bb-45b3-bd95-970270275127-kube-api-access-xl2lm\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.813892 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.814503 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"06ace92c-6051-496d-add5-845fcd6b184f\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.814752 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"06ace92c-6051-496d-add5-845fcd6b184f\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.814774 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"06ace92c-6051-496d-add5-845fcd6b184f\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 
08:48:07.818467 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06ace92c-6051-496d-add5-845fcd6b184f" (UID: "06ace92c-6051-496d-add5-845fcd6b184f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.818962 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config" (OuterVolumeSpecName: "config") pod "06ace92c-6051-496d-add5-845fcd6b184f" (UID: "06ace92c-6051-496d-add5-845fcd6b184f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.821816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj" (OuterVolumeSpecName: "kube-api-access-jfwpj") pod "06ace92c-6051-496d-add5-845fcd6b184f" (UID: "06ace92c-6051-496d-add5-845fcd6b184f"). InnerVolumeSpecName "kube-api-access-jfwpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.823331 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.823362 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.823372 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.934817 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.954441 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.955932 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.962729 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.977659 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.985311 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.986816 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.993687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" event={"ID":"06ace92c-6051-496d-add5-845fcd6b184f","Type":"ContainerDied","Data":"c4dbfaf7b50d7784d2c38e9516b227a07cd7a4d0e1609065c924221cceeafd2c"} Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.993815 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.997798 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.997958 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.998217 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.998398 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g55gk" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.025643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerStarted","Data":"e1f2f27ac70e2a7b40c793c7484fab4935efde62267c026bab00e4506643346f"} Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.031406 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 
08:48:08.031464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.031491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.041448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.041508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.109270 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.142902 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-scripts\") pod \"ovn-northd-0\" 
(UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.142975 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143025 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143067 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143191 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " 
pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143296 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn59q\" (UniqueName: \"kubernetes.io/projected/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-kube-api-access-bn59q\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 
08:48:08.143380 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-config\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.144896 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.145884 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.149507 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.175926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.178114 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:48:08 crc 
kubenswrapper[4758]: I0130 08:48:08.202547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.203314 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245177 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-scripts\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245290 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: 
I0130 08:48:08.245396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245426 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn59q\" (UniqueName: \"kubernetes.io/projected/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-kube-api-access-bn59q\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245447 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-config\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.246257 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-config\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.246814 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-scripts\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.247570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.253684 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.265129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.268710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.280684 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn59q\" (UniqueName: \"kubernetes.io/projected/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-kube-api-access-bn59q\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.329366 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.342381 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.544881 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.740120 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9zqv"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.006177 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.035551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerStarted","Data":"82d07fd150728b7c82f069c87ee62a0a60661daf49e51d27a49bbc40f9ce2ee9"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.041688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9zqv" event={"ID":"1c89386d-e6bb-45b3-bd95-970270275127","Type":"ContainerStarted","Data":"d1fc649238fe21b9dcbf8bcbb9d3c9fa7d9810f19f47d42798530f205349c409"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.058150 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" event={"ID":"e2df4666-d5a3-4445-81a2-6cf85fd83b33","Type":"ContainerDied","Data":"b96b4fc83550ef190db47ae38eb6bf89d769a168b8ca5a2463f2d0e77a15c934"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.058249 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.060136 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerStarted","Data":"a329555e5c83c8e76ae3384416532d1a40534ec90749967ee7e67ead47128aa7"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.061304 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063054 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063116 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063175 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2df4666-d5a3-4445-81a2-6cf85fd83b33" (UID: "e2df4666-d5a3-4445-81a2-6cf85fd83b33"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.064345 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config" (OuterVolumeSpecName: "config") pod "e2df4666-d5a3-4445-81a2-6cf85fd83b33" (UID: "e2df4666-d5a3-4445-81a2-6cf85fd83b33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.064795 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.064815 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.075992 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x" (OuterVolumeSpecName: "kube-api-access-gsz2x") pod "e2df4666-d5a3-4445-81a2-6cf85fd83b33" (UID: "e2df4666-d5a3-4445-81a2-6cf85fd83b33"). InnerVolumeSpecName "kube-api-access-gsz2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.166278 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.172949 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:48:09 crc kubenswrapper[4758]: W0130 08:48:09.184465 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod831717ab_1273_408c_9fdf_4cd5bd2d2bb9.slice/crio-d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d WatchSource:0}: Error finding container d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d: Status 404 returned error can't find the container with id d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.632000 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.639489 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.793604 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ace92c-6051-496d-add5-845fcd6b184f" path="/var/lib/kubelet/pods/06ace92c-6051-496d-add5-845fcd6b184f/volumes" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.794320 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2df4666-d5a3-4445-81a2-6cf85fd83b33" path="/var/lib/kubelet/pods/e2df4666-d5a3-4445-81a2-6cf85fd83b33/volumes" Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.068462 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"831717ab-1273-408c-9fdf-4cd5bd2d2bb9","Type":"ContainerStarted","Data":"d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.071151 4758 generic.go:334] "Generic (PLEG): container finished" podID="41049534-cd80-47a3-b923-be969750a8b9" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" exitCode=0 Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.071213 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerDied","Data":"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.072027 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerStarted","Data":"31513955cec6f870378becf0d67edc5a8694ab95bcb31c44777622788df86a4b"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.078543 4758 generic.go:334] "Generic (PLEG): container finished" podID="c1232df3-7af1-411f-9116-519761089007" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" exitCode=0 Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.078598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerDied","Data":"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.080815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9zqv" event={"ID":"1c89386d-e6bb-45b3-bd95-970270275127","Type":"ContainerStarted","Data":"00d40b09d9c85c1ba861c38b21bc369c538037b627c22d2aa32f28f7a604e474"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 
08:48:10.121716 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-w9zqv" podStartSLOduration=3.12169539 podStartE2EDuration="3.12169539s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:10.118672094 +0000 UTC m=+1095.090983645" watchObservedRunningTime="2026-01-30 08:48:10.12169539 +0000 UTC m=+1095.094006941" Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.838220 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.093803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"831717ab-1273-408c-9fdf-4cd5bd2d2bb9","Type":"ContainerStarted","Data":"b6d71226bb5391ed4462b44777e5829f5e784ffb42e69da0cacedd758dabe5c9"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.096027 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerStarted","Data":"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.096849 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.099336 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerStarted","Data":"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.099472 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:11 crc 
kubenswrapper[4758]: I0130 08:48:11.102138 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerStarted","Data":"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.105126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerStarted","Data":"858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.155509 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" podStartSLOduration=3.625828727 podStartE2EDuration="4.155488811s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="2026-01-30 08:48:08.576137578 +0000 UTC m=+1093.548449129" lastFinishedPulling="2026-01-30 08:48:09.105797662 +0000 UTC m=+1094.078109213" observedRunningTime="2026-01-30 08:48:11.150957838 +0000 UTC m=+1096.123269399" watchObservedRunningTime="2026-01-30 08:48:11.155488811 +0000 UTC m=+1096.127800362" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.156362 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-fz6p5" podStartSLOduration=4.156355899 podStartE2EDuration="4.156355899s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:11.13197857 +0000 UTC m=+1096.104290121" watchObservedRunningTime="2026-01-30 08:48:11.156355899 +0000 UTC m=+1096.128667450" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.114121 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"831717ab-1273-408c-9fdf-4cd5bd2d2bb9","Type":"ContainerStarted","Data":"09928b1251f5b16afbf883691fc0a650ba9ae547429f579784014910fa6b9200"} Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.149886 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.788833912 podStartE2EDuration="5.149846389s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="2026-01-30 08:48:09.186903762 +0000 UTC m=+1094.159215313" lastFinishedPulling="2026-01-30 08:48:10.547916239 +0000 UTC m=+1095.520227790" observedRunningTime="2026-01-30 08:48:12.146538234 +0000 UTC m=+1097.118849805" watchObservedRunningTime="2026-01-30 08:48:12.149846389 +0000 UTC m=+1097.122157960" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.761632 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.801336 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.802953 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.823912 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.853561 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943678 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045457 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045534 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045570 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045598 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 
crc kubenswrapper[4758]: I0130 08:48:13.045970 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.046388 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.046436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.046702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.047248 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.064022 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.125496 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.131483 4758 generic.go:334] "Generic (PLEG): container finished" podID="1787d8b1-5b19-41e5-a66d-8375f9d5bb3f" containerID="e1f2f27ac70e2a7b40c793c7484fab4935efde62267c026bab00e4506643346f" exitCode=0 Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.131538 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerDied","Data":"e1f2f27ac70e2a7b40c793c7484fab4935efde62267c026bab00e4506643346f"} Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.136095 4758 generic.go:334] "Generic (PLEG): container finished" podID="0a15517a-ff48-40d1-91b4-442bfef91fc1" containerID="a329555e5c83c8e76ae3384416532d1a40534ec90749967ee7e67ead47128aa7" exitCode=0 Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.136650 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" containerID="cri-o://7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" gracePeriod=10 Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.136149 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerDied","Data":"a329555e5c83c8e76ae3384416532d1a40534ec90749967ee7e67ead47128aa7"} Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.137193 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.618982 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.632017 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:48:13 crc kubenswrapper[4758]: W0130 08:48:13.643176 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05274cdb_49de_4144_85ca_3d46e1790dab.slice/crio-f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded WatchSource:0}: Error finding container f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded: Status 404 returned error can't find the container with id f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771321 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771619 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771674 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"c1232df3-7af1-411f-9116-519761089007\" 
(UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.786680 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl" (OuterVolumeSpecName: "kube-api-access-skkbl") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "kube-api-access-skkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.863093 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.876188 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.876432 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.890869 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.895281 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config" (OuterVolumeSpecName: "config") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.977956 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.978223 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991410 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:48:13 crc kubenswrapper[4758]: E0130 08:48:13.991725 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="init" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991740 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="init" Jan 30 08:48:13 crc kubenswrapper[4758]: E0130 08:48:13.991768 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991792 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991958 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.002228 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.005219 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.005546 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.006322 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kt9hk" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.011938 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.023898 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-lock\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079441 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6fs9\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-kube-api-access-z6fs9\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 
08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-cache\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079531 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978baf9-b7c0-4d25-8bca-e95a018ba2af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.162358 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerStarted","Data":"e4d0eee0ac548178b87635cf87a3ec675a9b0ac0b019cf2ca01ee8b2beb1a802"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167089 4758 generic.go:334] "Generic (PLEG): container finished" podID="c1232df3-7af1-411f-9116-519761089007" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" exitCode=0 Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerDied","Data":"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd"} Jan 
30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167619 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerDied","Data":"82d07fd150728b7c82f069c87ee62a0a60661daf49e51d27a49bbc40f9ce2ee9"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167927 4758 scope.go:117] "RemoveContainer" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.169838 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.171249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerStarted","Data":"04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.171312 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerStarted","Data":"f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.180852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerStarted","Data":"dedea668e8b90301979f85e0e7e116a2091b284d9505a7c83aca302a2a909be8"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187719 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978baf9-b7c0-4d25-8bca-e95a018ba2af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc 
kubenswrapper[4758]: I0130 08:48:14.187793 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-lock\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187873 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187924 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6fs9\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-kube-api-access-z6fs9\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187950 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187976 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-cache\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.188478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-cache\") pod \"swift-storage-0\" (UID: 
\"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.189089 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-lock\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.189097 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.189390 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.189410 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.189468 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:14.68944029 +0000 UTC m=+1099.661751842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.205464 4758 scope.go:117] "RemoveContainer" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.224659 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371991.630268 podStartE2EDuration="45.224507457s" podCreationTimestamp="2026-01-30 08:47:29 +0000 UTC" firstStartedPulling="2026-01-30 08:47:31.709367371 +0000 UTC m=+1056.681678922" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:14.198003581 +0000 UTC m=+1099.170315142" watchObservedRunningTime="2026-01-30 08:48:14.224507457 +0000 UTC m=+1099.196819008" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.224954 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978baf9-b7c0-4d25-8bca-e95a018ba2af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.232838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6fs9\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-kube-api-access-z6fs9\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.281755 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.652890171 
podStartE2EDuration="46.281729913s" podCreationTimestamp="2026-01-30 08:47:28 +0000 UTC" firstStartedPulling="2026-01-30 08:47:31.576404456 +0000 UTC m=+1056.548716007" lastFinishedPulling="2026-01-30 08:48:07.205244198 +0000 UTC m=+1092.177555749" observedRunningTime="2026-01-30 08:48:14.278914144 +0000 UTC m=+1099.251225695" watchObservedRunningTime="2026-01-30 08:48:14.281729913 +0000 UTC m=+1099.254041464" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.283879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.284142 4758 scope.go:117] "RemoveContainer" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.285674 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd\": container with ID starting with 7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd not found: ID does not exist" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.285729 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd"} err="failed to get container status \"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd\": rpc error: code = NotFound desc = could not find container \"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd\": container with ID starting with 7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd not found: ID does not exist" Jan 30 08:48:14 
crc kubenswrapper[4758]: I0130 08:48:14.285752 4758 scope.go:117] "RemoveContainer" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.286152 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c\": container with ID starting with b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c not found: ID does not exist" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.286269 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c"} err="failed to get container status \"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c\": rpc error: code = NotFound desc = could not find container \"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c\": container with ID starting with b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c not found: ID does not exist" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.345259 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.352427 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.530168 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.531445 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.542236 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.542929 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.543451 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.596225 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.596490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.596982 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603098 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 
08:48:14.603383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603764 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.605745 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.607142 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-9gsfj ring-data-devices scripts swiftconf], unattached volumes=[], failed to 
process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-pwl7q" podUID="3da668f4-93c9-4c1a-b2b7-05ad516c0637" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705252 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705311 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705397 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705440 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " 
pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705494 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705943 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.706006 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.706478 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.707284 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.707418 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.707623 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:15.707598251 +0000 UTC m=+1100.679909862 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.710509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.710901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.711829 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.729702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.187808 4758 generic.go:334] "Generic (PLEG): container finished" podID="05274cdb-49de-4144-85ca-3d46e1790dab" 
containerID="04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3" exitCode=0 Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.187866 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerDied","Data":"04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3"} Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.190726 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.284718 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417343 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417655 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417764 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417831 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417870 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417893 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.418801 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts" (OuterVolumeSpecName: "scripts") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.419726 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.419969 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.423916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.423998 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.429589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.432793 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj" (OuterVolumeSpecName: "kube-api-access-9gsfj") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "kube-api-access-9gsfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520160 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520200 4758 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520211 4758 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520223 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520234 4758 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520245 4758 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520256 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.723416 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:15 crc kubenswrapper[4758]: E0130 08:48:15.723641 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:15 crc kubenswrapper[4758]: E0130 08:48:15.723675 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:15 crc kubenswrapper[4758]: E0130 08:48:15.723744 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:17.723720456 +0000 UTC m=+1102.696032007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.778509 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1232df3-7af1-411f-9116-519761089007" path="/var/lib/kubelet/pods/c1232df3-7af1-411f-9116-519761089007/volumes" Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.198690 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.198694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerStarted","Data":"fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201"} Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.239330 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.248619 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:17 crc kubenswrapper[4758]: I0130 08:48:17.205578 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:17 crc kubenswrapper[4758]: I0130 08:48:17.753261 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:17 crc kubenswrapper[4758]: E0130 08:48:17.753445 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found Jan 30 08:48:17 crc kubenswrapper[4758]: E0130 08:48:17.753462 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:17 crc kubenswrapper[4758]: E0130 08:48:17.753513 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:21.753497785 +0000 UTC m=+1106.725809336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:17 crc kubenswrapper[4758]: I0130 08:48:17.777544 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da668f4-93c9-4c1a-b2b7-05ad516c0637" path="/var/lib/kubelet/pods/3da668f4-93c9-4c1a-b2b7-05ad516c0637/volumes" Jan 30 08:48:18 crc kubenswrapper[4758]: I0130 08:48:18.331184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:18 crc kubenswrapper[4758]: I0130 08:48:18.352378 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podStartSLOduration=6.3523560660000005 podStartE2EDuration="6.352356066s" podCreationTimestamp="2026-01-30 08:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:17.233651754 +0000 UTC m=+1102.205963305" watchObservedRunningTime="2026-01-30 08:48:18.352356066 +0000 UTC m=+1103.324667617" Jan 30 08:48:19 crc kubenswrapper[4758]: I0130 08:48:19.784801 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 08:48:19 crc kubenswrapper[4758]: I0130 08:48:19.785167 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 08:48:20 crc kubenswrapper[4758]: I0130 08:48:20.639488 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:20 crc kubenswrapper[4758]: I0130 08:48:20.639532 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:21 crc kubenswrapper[4758]: I0130 08:48:21.551866 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:21 crc kubenswrapper[4758]: I0130 08:48:21.632797 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:21 crc kubenswrapper[4758]: I0130 08:48:21.817937 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:21 crc kubenswrapper[4758]: E0130 08:48:21.818228 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:21 crc kubenswrapper[4758]: E0130 08:48:21.818260 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:21 crc kubenswrapper[4758]: E0130 08:48:21.818322 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. 
No retries permitted until 2026-01-30 08:48:29.818302092 +0000 UTC m=+1114.790613643 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:22 crc kubenswrapper[4758]: I0130 08:48:22.386959 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:48:22 crc kubenswrapper[4758]: I0130 08:48:22.387024 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.130253 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.213370 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.213610 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-fz6p5" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" containerID="cri-o://b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" gracePeriod=10 Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.330503 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-fz6p5" 
podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.724696 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.768999 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769079 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769152 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769177 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769218 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.799949 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs" (OuterVolumeSpecName: "kube-api-access-nmzbs") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "kube-api-access-nmzbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.817966 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.826376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.827971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config" (OuterVolumeSpecName: "config") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.840726 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.869796 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873293 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873331 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873343 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873352 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873361 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc 
kubenswrapper[4758]: I0130 08:48:23.946649 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.258848 4758 generic.go:334] "Generic (PLEG): container finished" podID="41049534-cd80-47a3-b923-be969750a8b9" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" exitCode=0 Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.258943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerDied","Data":"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24"} Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.258999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerDied","Data":"31513955cec6f870378becf0d67edc5a8694ab95bcb31c44777622788df86a4b"} Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.259024 4758 scope.go:117] "RemoveContainer" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.259901 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.294745 4758 scope.go:117] "RemoveContainer" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.309173 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.315973 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.332625 4758 scope.go:117] "RemoveContainer" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" Jan 30 08:48:24 crc kubenswrapper[4758]: E0130 08:48:24.333160 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24\": container with ID starting with b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24 not found: ID does not exist" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.333195 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24"} err="failed to get container status \"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24\": rpc error: code = NotFound desc = could not find container \"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24\": container with ID starting with b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24 not found: ID does not exist" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.333228 4758 scope.go:117] "RemoveContainer" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" Jan 30 
08:48:24 crc kubenswrapper[4758]: E0130 08:48:24.333491 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105\": container with ID starting with b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105 not found: ID does not exist" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.333560 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105"} err="failed to get container status \"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105\": rpc error: code = NotFound desc = could not find container \"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105\": container with ID starting with b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105 not found: ID does not exist" Jan 30 08:48:25 crc kubenswrapper[4758]: I0130 08:48:25.778323 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41049534-cd80-47a3-b923-be969750a8b9" path="/var/lib/kubelet/pods/41049534-cd80-47a3-b923-be969750a8b9/volumes" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.174119 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:28 crc kubenswrapper[4758]: E0130 08:48:28.174862 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.174884 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" Jan 30 08:48:28 crc kubenswrapper[4758]: E0130 08:48:28.174937 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="init" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.174946 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="init" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.175174 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.175802 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.177704 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.192969 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.243071 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.243112 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.344308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.344400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.346225 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.372381 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.409237 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.495098 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.968752 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.292375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerStarted","Data":"183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60"} Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.292661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerStarted","Data":"8e847bc505b4253fd572f62d30b4950f0e0d038f85122290c011602277ae9076"} Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.309498 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cps5h" podStartSLOduration=1.309474818 podStartE2EDuration="1.309474818s" podCreationTimestamp="2026-01-30 08:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:29.307154454 +0000 UTC m=+1114.279466025" watchObservedRunningTime="2026-01-30 08:48:29.309474818 +0000 UTC m=+1114.281786369" Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.869162 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:29 crc kubenswrapper[4758]: E0130 08:48:29.869393 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:29 
crc kubenswrapper[4758]: E0130 08:48:29.869427 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:29 crc kubenswrapper[4758]: E0130 08:48:29.869489 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:45.869471404 +0000 UTC m=+1130.841782955 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.300143 4758 generic.go:334] "Generic (PLEG): container finished" podID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerID="183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60" exitCode=0 Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.300184 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerDied","Data":"183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60"} Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.406210 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.407130 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.453617 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.478723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.478799 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.580191 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.580564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.581517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.612164 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.614191 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.615154 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.619222 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.635863 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.682384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.683789 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.738886 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.786003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.786132 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.786879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.809529 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"keystone-d642-account-create-update-zwktw\" (UID: 
\"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.959181 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.960900 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.972509 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.984413 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.068601 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.079735 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.086178 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.087106 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.089432 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.096551 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.103370 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.103451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.115490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207408 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207629 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207668 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207802 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.212328 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.222959 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.224017 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.228353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.229239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.236870 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.277353 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.292123 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310670 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310937 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.311108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.312227 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.312726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.332716 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.333418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllsb\" (UniqueName: 
\"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.336706 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8r78c" event={"ID":"a04121a4-3ab4-48c5-903e-8d0002771ff7","Type":"ContainerStarted","Data":"e494bb189e2d0ac5e4da8332fef46a9f2687ebe52e965755cf5c04ba2a6d0947"} Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.413689 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.413760 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.414883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.420965 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.425052 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.431821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.517964 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bg2b8" podUID="78294966-2fbd-4ed5-8d2a-2096ac07dac1" containerName="ovn-controller" probeResult="failure" output=< Jan 30 08:48:31 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 08:48:31 crc kubenswrapper[4758]: > Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.561435 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.596533 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:48:31 crc kubenswrapper[4758]: W0130 08:48:31.621199 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod585a6b3e_695a_4ddc_91a6_9b39b241ffd0.slice/crio-5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6 WatchSource:0}: Error finding container 5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6: Status 404 returned error can't find the container with id 5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6 Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.871469 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.964222 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.038004 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"88b3ba93-ce04-4374-8fed-db25eb3b3065\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.038095 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"88b3ba93-ce04-4374-8fed-db25eb3b3065\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.039829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88b3ba93-ce04-4374-8fed-db25eb3b3065" (UID: "88b3ba93-ce04-4374-8fed-db25eb3b3065"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.045598 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx" (OuterVolumeSpecName: "kube-api-access-6kxjx") pod "88b3ba93-ce04-4374-8fed-db25eb3b3065" (UID: "88b3ba93-ce04-4374-8fed-db25eb3b3065"). InnerVolumeSpecName "kube-api-access-6kxjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.141429 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.141468 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.196049 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:48:32 crc kubenswrapper[4758]: W0130 08:48:32.200930 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2121970a_59f7_4943_ac1e_fa675e5eef8e.slice/crio-145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5 WatchSource:0}: Error finding container 145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5: Status 404 returned error can't find the container with id 145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.211840 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.353434 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2wfwb" event={"ID":"2f506e0d-da4f-4243-923b-f7e102fafd92","Type":"ContainerStarted","Data":"61116589352b40224a96215bd65987be6542324de26a624da86a1ed3cc2be782"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.355739 4758 generic.go:334] "Generic (PLEG): container finished" podID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" 
containerID="a6e4fb8d0b259571728fd753067c06a5a86fb0519ae153af2e5b3b44c3d06d59" exitCode=0 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.355800 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d642-account-create-update-zwktw" event={"ID":"585a6b3e-695a-4ddc-91a6-9b39b241ffd0","Type":"ContainerDied","Data":"a6e4fb8d0b259571728fd753067c06a5a86fb0519ae153af2e5b3b44c3d06d59"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.355825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d642-account-create-update-zwktw" event={"ID":"585a6b3e-695a-4ddc-91a6-9b39b241ffd0","Type":"ContainerStarted","Data":"5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.359514 4758 generic.go:334] "Generic (PLEG): container finished" podID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerID="26c6c01e8de6266c91045a1615b456f02f5eaff630949bebc138b39fd169f580" exitCode=0 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.359591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8r78c" event={"ID":"a04121a4-3ab4-48c5-903e-8d0002771ff7","Type":"ContainerDied","Data":"26c6c01e8de6266c91045a1615b456f02f5eaff630949bebc138b39fd169f580"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.362304 4758 generic.go:334] "Generic (PLEG): container finished" podID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerID="79153cef375b67170413da87462ce99cc5566739d6bdf2f6c1513a6b0ebe4db9" exitCode=0 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.362375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vv7kf" event={"ID":"418d35e8-ad4e-4051-a0f1-fc9179300441","Type":"ContainerDied","Data":"79153cef375b67170413da87462ce99cc5566739d6bdf2f6c1513a6b0ebe4db9"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.362413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-vv7kf" event={"ID":"418d35e8-ad4e-4051-a0f1-fc9179300441","Type":"ContainerStarted","Data":"f63a7aacdece4e18ffb5932d48f95957334f667a295b639f15c326a86a477819"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.364242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerDied","Data":"8e847bc505b4253fd572f62d30b4950f0e0d038f85122290c011602277ae9076"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.364270 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e847bc505b4253fd572f62d30b4950f0e0d038f85122290c011602277ae9076" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.364350 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.367207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerStarted","Data":"145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.398267 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.380010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerStarted","Data":"1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.384360 4758 generic.go:334] "Generic (PLEG): container finished" podID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerID="aabb12ec35a8ae64e19c55b39db2dfd8844c3fe734c3d7e0940dfeeffb34c9d1" 
exitCode=0 Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.384464 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2wfwb" event={"ID":"2f506e0d-da4f-4243-923b-f7e102fafd92","Type":"ContainerDied","Data":"aabb12ec35a8ae64e19c55b39db2dfd8844c3fe734c3d7e0940dfeeffb34c9d1"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.386543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerStarted","Data":"daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.386577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerStarted","Data":"c65be4ec5473b830e36189ebc74b7fa4306eae2315b26780268f290367ce6c90"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.469788 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-72be-account-create-update-gphrs" podStartSLOduration=2.46976517 podStartE2EDuration="2.46976517s" podCreationTimestamp="2026-01-30 08:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:33.410542657 +0000 UTC m=+1118.382854208" watchObservedRunningTime="2026-01-30 08:48:33.46976517 +0000 UTC m=+1118.442076721" Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.483568 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50ef-account-create-update-bskhv" podStartSLOduration=2.4835527170000002 podStartE2EDuration="2.483552717s" podCreationTimestamp="2026-01-30 08:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 08:48:33.467596431 +0000 UTC m=+1118.439907982" watchObservedRunningTime="2026-01-30 08:48:33.483552717 +0000 UTC m=+1118.455864268" Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.986373 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.107160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.107259 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.108128 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "585a6b3e-695a-4ddc-91a6-9b39b241ffd0" (UID: "585a6b3e-695a-4ddc-91a6-9b39b241ffd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.113907 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz" (OuterVolumeSpecName: "kube-api-access-jq5wz") pod "585a6b3e-695a-4ddc-91a6-9b39b241ffd0" (UID: "585a6b3e-695a-4ddc-91a6-9b39b241ffd0"). InnerVolumeSpecName "kube-api-access-jq5wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.149571 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.166586 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.198255 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.206838 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.209812 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.209845 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311250 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"418d35e8-ad4e-4051-a0f1-fc9179300441\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311729 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "418d35e8-ad4e-4051-a0f1-fc9179300441" (UID: 
"418d35e8-ad4e-4051-a0f1-fc9179300441"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"418d35e8-ad4e-4051-a0f1-fc9179300441\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311871 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"a04121a4-3ab4-48c5-903e-8d0002771ff7\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311948 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"a04121a4-3ab4-48c5-903e-8d0002771ff7\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.312489 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.313209 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a04121a4-3ab4-48c5-903e-8d0002771ff7" (UID: "a04121a4-3ab4-48c5-903e-8d0002771ff7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.316030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l" (OuterVolumeSpecName: "kube-api-access-5wm8l") pod "a04121a4-3ab4-48c5-903e-8d0002771ff7" (UID: "a04121a4-3ab4-48c5-903e-8d0002771ff7"). InnerVolumeSpecName "kube-api-access-5wm8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.316524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp" (OuterVolumeSpecName: "kube-api-access-cmkwp") pod "418d35e8-ad4e-4051-a0f1-fc9179300441" (UID: "418d35e8-ad4e-4051-a0f1-fc9179300441"). InnerVolumeSpecName "kube-api-access-cmkwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.397875 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d642-account-create-update-zwktw" event={"ID":"585a6b3e-695a-4ddc-91a6-9b39b241ffd0","Type":"ContainerDied","Data":"5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.397915 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.399163 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.399657 4758 generic.go:334] "Generic (PLEG): container finished" podID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerID="daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a" exitCode=0 Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.399700 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerDied","Data":"daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.401854 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.402447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8r78c" event={"ID":"a04121a4-3ab4-48c5-903e-8d0002771ff7","Type":"ContainerDied","Data":"e494bb189e2d0ac5e4da8332fef46a9f2687ebe52e965755cf5c04ba2a6d0947"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.402580 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e494bb189e2d0ac5e4da8332fef46a9f2687ebe52e965755cf5c04ba2a6d0947" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.407371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vv7kf" event={"ID":"418d35e8-ad4e-4051-a0f1-fc9179300441","Type":"ContainerDied","Data":"f63a7aacdece4e18ffb5932d48f95957334f667a295b639f15c326a86a477819"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.407415 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63a7aacdece4e18ffb5932d48f95957334f667a295b639f15c326a86a477819" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.407581 4758 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.414355 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.414392 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.414407 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.416601 4758 generic.go:334] "Generic (PLEG): container finished" podID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerID="1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add" exitCode=0 Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.416735 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerDied","Data":"1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.772974 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.941661 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"2f506e0d-da4f-4243-923b-f7e102fafd92\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.941824 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"2f506e0d-da4f-4243-923b-f7e102fafd92\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.942245 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f506e0d-da4f-4243-923b-f7e102fafd92" (UID: "2f506e0d-da4f-4243-923b-f7e102fafd92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.944858 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx" (OuterVolumeSpecName: "kube-api-access-8wlpx") pod "2f506e0d-da4f-4243-923b-f7e102fafd92" (UID: "2f506e0d-da4f-4243-923b-f7e102fafd92"). InnerVolumeSpecName "kube-api-access-8wlpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.044031 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.044092 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.427671 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2wfwb" event={"ID":"2f506e0d-da4f-4243-923b-f7e102fafd92","Type":"ContainerDied","Data":"61116589352b40224a96215bd65987be6542324de26a624da86a1ed3cc2be782"} Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.427991 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61116589352b40224a96215bd65987be6542324de26a624da86a1ed3cc2be782" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.429144 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.779686 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" path="/var/lib/kubelet/pods/88b3ba93-ce04-4374-8fed-db25eb3b3065/volumes" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.892338 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.897750 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063569 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063639 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"2121970a-59f7-4943-ac1e-fa675e5eef8e\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063807 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"2121970a-59f7-4943-ac1e-fa675e5eef8e\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.064522 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2121970a-59f7-4943-ac1e-fa675e5eef8e" (UID: "2121970a-59f7-4943-ac1e-fa675e5eef8e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.064523 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e9c3b23-6678-42de-923a-27ebb5fb61a3" (UID: "1e9c3b23-6678-42de-923a-27ebb5fb61a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.068935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt" (OuterVolumeSpecName: "kube-api-access-275pt") pod "1e9c3b23-6678-42de-923a-27ebb5fb61a3" (UID: "1e9c3b23-6678-42de-923a-27ebb5fb61a3"). InnerVolumeSpecName "kube-api-access-275pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.072648 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb" (OuterVolumeSpecName: "kube-api-access-hllsb") pod "2121970a-59f7-4943-ac1e-fa675e5eef8e" (UID: "2121970a-59f7-4943-ac1e-fa675e5eef8e"). InnerVolumeSpecName "kube-api-access-hllsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165215 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165251 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165260 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165269 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.436984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerDied","Data":"145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5"} Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.437310 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.437010 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.439817 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerDied","Data":"c65be4ec5473b830e36189ebc74b7fa4306eae2315b26780268f290367ce6c90"} Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.439867 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65be4ec5473b830e36189ebc74b7fa4306eae2315b26780268f290367ce6c90" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.439882 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.490027 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bg2b8" podUID="78294966-2fbd-4ed5-8d2a-2096ac07dac1" containerName="ovn-controller" probeResult="failure" output=< Jan 30 08:48:36 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 08:48:36 crc kubenswrapper[4758]: > Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.590851 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.616548 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855512 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerName="mariadb-database-create" 
Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855524 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855540 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855546 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855568 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855574 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855582 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855587 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855600 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855606 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855618 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855623 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855637 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855642 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855789 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855800 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855814 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855821 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855829 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855838 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" 
containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855849 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.856357 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.862138 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.862441 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980535 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980601 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980637 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " 
pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980660 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980709 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980733 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082346 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: 
\"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082441 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082503 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082519 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083009 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: 
\"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083011 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083011 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083559 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.085017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.100856 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " 
pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.190845 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.871427 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:38 crc kubenswrapper[4758]: I0130 08:48:38.507418 4758 generic.go:334] "Generic (PLEG): container finished" podID="1d179b42-bac4-40cb-afea-af27020a2b51" containerID="44986833bc4a6d26e1f0961e4b7a2ef317d50d30ae747ee1fe73659d1eb85ff3" exitCode=0 Jan 30 08:48:38 crc kubenswrapper[4758]: I0130 08:48:38.507464 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-s84jw" event={"ID":"1d179b42-bac4-40cb-afea-af27020a2b51","Type":"ContainerDied","Data":"44986833bc4a6d26e1f0961e4b7a2ef317d50d30ae747ee1fe73659d1eb85ff3"} Jan 30 08:48:38 crc kubenswrapper[4758]: I0130 08:48:38.507488 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-s84jw" event={"ID":"1d179b42-bac4-40cb-afea-af27020a2b51","Type":"ContainerStarted","Data":"64b3554d747e64fa29aa5f7830b4267e45eb5c0b76b56d2c4def2902f3ea79e9"} Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.197303 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.198424 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.200093 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.211462 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.247511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.247569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.349517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.349580 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"root-account-create-update-5vdm5\" (UID: 
\"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.350433 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.374853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.525271 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.936467 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060783 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060828 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060926 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060959 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062324 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run" (OuterVolumeSpecName: "var-run") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062917 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts" (OuterVolumeSpecName: "scripts") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.071420 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8" (OuterVolumeSpecName: "kube-api-access-xxjq8") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "kube-api-access-xxjq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.087886 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162918 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162945 4758 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162955 4758 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 
08:48:40.162976 4758 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162985 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162992 4758 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.523330 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-s84jw" event={"ID":"1d179b42-bac4-40cb-afea-af27020a2b51","Type":"ContainerDied","Data":"64b3554d747e64fa29aa5f7830b4267e45eb5c0b76b56d2c4def2902f3ea79e9"} Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.523716 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b3554d747e64fa29aa5f7830b4267e45eb5c0b76b56d2c4def2902f3ea79e9" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.523363 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.525865 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5vdm5" event={"ID":"78535c40-59db-4a0e-bcb9-4bae7e92548c","Type":"ContainerDied","Data":"defa08629f2c819116b155489e34ad3fab9a107ee7bcfc7941be06048d203e56"} Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.525715 4758 generic.go:334] "Generic (PLEG): container finished" podID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerID="defa08629f2c819116b155489e34ad3fab9a107ee7bcfc7941be06048d203e56" exitCode=0 Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.526424 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5vdm5" event={"ID":"78535c40-59db-4a0e-bcb9-4bae7e92548c","Type":"ContainerStarted","Data":"4be144c8044df6977860b5c54c90d9e992a7b9f80f15e2e44a617553dddd6d06"} Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.047394 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.066711 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.118169 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:41 crc kubenswrapper[4758]: E0130 08:48:41.120789 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" containerName="ovn-config" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.120954 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" containerName="ovn-config" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.121253 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d179b42-bac4-40cb-afea-af27020a2b51" containerName="ovn-config" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.123006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.127637 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.130490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.195796 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.196652 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.196775 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.196947 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.197237 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.199387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301358 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301512 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301572 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301637 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301662 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302282 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302369 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302626 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.303796 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.321910 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.382887 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:48:41 
crc kubenswrapper[4758]: I0130 08:48:41.384012 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.386806 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.386840 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpw9v" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.393079 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403303 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403402 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403468 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403554 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.488476 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506563 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506617 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 
08:48:41.512055 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.512994 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.533142 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.553599 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.704948 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.875592 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" path="/var/lib/kubelet/pods/1d179b42-bac4-40cb-afea-af27020a2b51/volumes" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.876503 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bg2b8" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.193843 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.231991 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"78535c40-59db-4a0e-bcb9-4bae7e92548c\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.232066 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"78535c40-59db-4a0e-bcb9-4bae7e92548c\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.234435 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78535c40-59db-4a0e-bcb9-4bae7e92548c" (UID: "78535c40-59db-4a0e-bcb9-4bae7e92548c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.238121 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq" (OuterVolumeSpecName: "kube-api-access-96jjq") pod "78535c40-59db-4a0e-bcb9-4bae7e92548c" (UID: "78535c40-59db-4a0e-bcb9-4bae7e92548c"). InnerVolumeSpecName "kube-api-access-96jjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.333880 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.333923 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.340149 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:42 crc kubenswrapper[4758]: W0130 08:48:42.343119 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7558428a_6aa5_454c_b401_d921918c8239.slice/crio-9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba WatchSource:0}: Error finding container 9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba: Status 404 returned error can't find the container with id 9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.565379 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" 
containerID="858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f" exitCode=0 Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.565653 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerDied","Data":"858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f"} Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.573772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5vdm5" event={"ID":"78535c40-59db-4a0e-bcb9-4bae7e92548c","Type":"ContainerDied","Data":"4be144c8044df6977860b5c54c90d9e992a7b9f80f15e2e44a617553dddd6d06"} Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.573815 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be144c8044df6977860b5c54c90d9e992a7b9f80f15e2e44a617553dddd6d06" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.573872 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.576002 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.579744 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-v28g5" event={"ID":"7558428a-6aa5-454c-b401-d921918c8239","Type":"ContainerStarted","Data":"9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba"} Jan 30 08:48:42 crc kubenswrapper[4758]: W0130 08:48:42.586335 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2e127b5_11e2_40a6_8389_a9d08b8cae4f.slice/crio-32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949 WatchSource:0}: Error finding container 32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949: Status 404 returned error can't find the container with id 32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949 Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.591348 4758 generic.go:334] "Generic (PLEG): container finished" podID="7558428a-6aa5-454c-b401-d921918c8239" containerID="86c33d040e9741a4c414c369434d352975d29a3e661c9ee51459ded8965dad1b" exitCode=0 Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.591399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-v28g5" event={"ID":"7558428a-6aa5-454c-b401-d921918c8239","Type":"ContainerDied","Data":"86c33d040e9741a4c414c369434d352975d29a3e661c9ee51459ded8965dad1b"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.595699 4758 generic.go:334] "Generic (PLEG): container finished" podID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" exitCode=0 Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.595773 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerDied","Data":"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.605416 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerStarted","Data":"32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.612682 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerStarted","Data":"cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.614802 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.692124 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.454834214 podStartE2EDuration="1m17.692097769s" podCreationTimestamp="2026-01-30 08:47:26 +0000 UTC" firstStartedPulling="2026-01-30 08:47:29.110229854 +0000 UTC m=+1054.082541405" lastFinishedPulling="2026-01-30 08:48:09.347493409 +0000 UTC m=+1094.319804960" observedRunningTime="2026-01-30 08:48:43.676901126 +0000 UTC m=+1128.649212687" watchObservedRunningTime="2026-01-30 08:48:43.692097769 +0000 UTC m=+1128.664409320" Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.626323 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerStarted","Data":"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba"} Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 
08:48:44.627339 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.647432 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.207357 podStartE2EDuration="1m18.647417638s" podCreationTimestamp="2026-01-30 08:47:26 +0000 UTC" firstStartedPulling="2026-01-30 08:47:28.856548099 +0000 UTC m=+1053.828859660" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:44.645912421 +0000 UTC m=+1129.618223982" watchObservedRunningTime="2026-01-30 08:48:44.647417638 +0000 UTC m=+1129.619729189" Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.974253 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095349 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095446 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095471 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc 
kubenswrapper[4758]: I0130 08:48:45.095546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095624 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095658 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096116 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run" (OuterVolumeSpecName: "var-run") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096821 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096861 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096881 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.098467 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts" (OuterVolumeSpecName: "scripts") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.102138 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9" (OuterVolumeSpecName: "kube-api-access-k58d9") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "kube-api-access-k58d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198206 4758 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198247 4758 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198262 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198274 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198285 4758 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198297 4758 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.633684 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-v28g5" event={"ID":"7558428a-6aa5-454c-b401-d921918c8239","Type":"ContainerDied","Data":"9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba"} Jan 30 08:48:45 crc 
kubenswrapper[4758]: I0130 08:48:45.633750 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.633912 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.913755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:45 crc kubenswrapper[4758]: E0130 08:48:45.914587 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:45 crc kubenswrapper[4758]: E0130 08:48:45.914608 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:45 crc kubenswrapper[4758]: E0130 08:48:45.914655 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:49:17.914637901 +0000 UTC m=+1162.886949452 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:46 crc kubenswrapper[4758]: I0130 08:48:46.092444 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:46 crc kubenswrapper[4758]: I0130 08:48:46.102212 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:47 crc kubenswrapper[4758]: I0130 08:48:47.781911 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7558428a-6aa5-454c-b401-d921918c8239" path="/var/lib/kubelet/pods/7558428a-6aa5-454c-b401-d921918c8239/volumes" Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.387904 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388251 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388293 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388931 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388974 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2" gracePeriod=600 Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.700810 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2" exitCode=0 Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.700868 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2"} Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.700911 4758 scope.go:117] "RemoveContainer" containerID="69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.109310 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.285279 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.772767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2"} Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902127 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:48:58 crc kubenswrapper[4758]: E0130 08:48:58.902514 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerName="mariadb-account-create-update" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902534 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerName="mariadb-account-create-update" Jan 30 08:48:58 crc kubenswrapper[4758]: E0130 08:48:58.902560 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7558428a-6aa5-454c-b401-d921918c8239" containerName="ovn-config" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902570 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7558428a-6aa5-454c-b401-d921918c8239" containerName="ovn-config" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902764 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerName="mariadb-account-create-update" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902800 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7558428a-6aa5-454c-b401-d921918c8239" containerName="ovn-config" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.903338 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.974750 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.974810 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.982667 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.076856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.076922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.078261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.106395 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.107554 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.110765 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.131013 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.133015 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.178210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.178301 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqh72\" (UniqueName: 
\"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.224240 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.279468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.279545 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.280406 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.313729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: 
\"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.386013 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.387223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.422324 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.423425 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.427896 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.427990 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.448938 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484000 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lzfr\" (UniqueName: 
\"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.493441 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.589878 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.590200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.590293 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.590367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.591286 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod 
\"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.591957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.604356 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.605938 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.636146 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.679600 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.682484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.692928 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.693216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.704682 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.749161 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.750707 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.763059 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.763280 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.766011 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.766216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.767362 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.799170 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.799929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.799980 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.800021 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.800095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"neutron-db-create-2rzkq\" (UID: 
\"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.800930 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.808284 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.839267 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerStarted","Data":"2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172"} Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.850109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.862535 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.863968 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.866482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.876675 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.881945 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kc8cg" podStartSLOduration=3.454730715 podStartE2EDuration="18.881924712s" podCreationTimestamp="2026-01-30 08:48:41 +0000 UTC" firstStartedPulling="2026-01-30 08:48:42.58855767 +0000 UTC m=+1127.560869221" lastFinishedPulling="2026-01-30 08:48:58.015751667 +0000 UTC m=+1142.988063218" observedRunningTime="2026-01-30 08:48:59.864695026 +0000 UTC m=+1144.837006577" watchObservedRunningTime="2026-01-30 08:48:59.881924712 +0000 UTC m=+1144.854236263" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905778 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905916 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.930751 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.931559 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.945683 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnbx\" (UniqueName: 
\"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.945700 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.007089 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.007148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.008415 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.038211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") 
" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.060274 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.170906 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.194188 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.652585 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:49:00 crc kubenswrapper[4758]: W0130 08:49:00.659881 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58afb54a_7b57_42bc_af7c_13db0bfd1580.slice/crio-87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748 WatchSource:0}: Error finding container 87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748: Status 404 returned error can't find the container with id 87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748 Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.688891 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.755408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:49:00 crc kubenswrapper[4758]: W0130 08:49:00.758527 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a01921f_9f43_471a_bbd3_0a7e9bab364e.slice/crio-75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98 WatchSource:0}: Error finding container 
75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98: Status 404 returned error can't find the container with id 75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98 Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.780134 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.902407 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerStarted","Data":"75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.926679 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerStarted","Data":"98d64788b2e4146b6e0c5cd1d465c9c9a9a919896eb41c7fa3f82de022f57544"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.935289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerStarted","Data":"2543f597f41a34e8ce4ecb182dddc5c7e9b80aa1fe865ceffab33db59555554a"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.947857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerStarted","Data":"1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.947902 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerStarted","Data":"6a2b1b56efb4ec4f5e3111fa2055368ffd6b8141c1330555ede25dc0591fb33e"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.954092 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerStarted","Data":"87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.988273 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-cgr7c" podStartSLOduration=2.988253791 podStartE2EDuration="2.988253791s" podCreationTimestamp="2026-01-30 08:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:00.987792996 +0000 UTC m=+1145.960104577" watchObservedRunningTime="2026-01-30 08:49:00.988253791 +0000 UTC m=+1145.960565342" Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.058144 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:49:01 crc kubenswrapper[4758]: W0130 08:49:01.090276 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5665f568_1d3e_48dc_8a32_7bd9ad02a037.slice/crio-d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106 WatchSource:0}: Error finding container d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106: Status 404 returned error can't find the container with id d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106 Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.101194 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:49:01 crc kubenswrapper[4758]: W0130 08:49:01.119653 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc85c22cc_99a7_4939_904c_bffa8f2d5457.slice/crio-ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308 WatchSource:0}: Error finding container ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308: Status 404 returned error can't find the container with id ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308 Jan 30 08:49:01 crc kubenswrapper[4758]: E0130 08:49:01.647525 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b87c6fc_6491_419a_96bb_9542edb1e8aa.slice/crio-1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.976257 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerStarted","Data":"b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e"} Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.989765 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerStarted","Data":"0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.005643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerStarted","Data":"d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.018857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" 
event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerStarted","Data":"6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.024567 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-feed-account-create-update-qcmp4" podStartSLOduration=3.024534824 podStartE2EDuration="3.024534824s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.008530175 +0000 UTC m=+1146.980841736" watchObservedRunningTime="2026-01-30 08:49:02.024534824 +0000 UTC m=+1146.996846375" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.026509 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerStarted","Data":"de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.040407 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerID="1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20" exitCode=0 Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.040541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerDied","Data":"1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.045499 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-bstt2" podStartSLOduration=3.045469849 podStartE2EDuration="3.045469849s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.041339738 +0000 UTC m=+1147.013651289" watchObservedRunningTime="2026-01-30 08:49:02.045469849 +0000 UTC m=+1147.017781400" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.054009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerStarted","Data":"edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.054076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerStarted","Data":"ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.120784 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-495a-account-create-update-k2qtr" podStartSLOduration=3.120758152 podStartE2EDuration="3.120758152s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.078359544 +0000 UTC m=+1147.050671095" watchObservedRunningTime="2026-01-30 08:49:02.120758152 +0000 UTC m=+1147.093069853" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.163822 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-2rzkq" podStartSLOduration=3.163802819 podStartE2EDuration="3.163802819s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.160611359 +0000 UTC m=+1147.132922910" watchObservedRunningTime="2026-01-30 08:49:02.163802819 +0000 UTC 
m=+1147.136114370" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.219108 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5131-account-create-update-tbcx5" podStartSLOduration=3.219074747 podStartE2EDuration="3.219074747s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.206708793 +0000 UTC m=+1147.179020364" watchObservedRunningTime="2026-01-30 08:49:02.219074747 +0000 UTC m=+1147.191386308" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.097992 4758 generic.go:334] "Generic (PLEG): container finished" podID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerID="6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed" exitCode=0 Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.099746 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerDied","Data":"6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed"} Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.622611 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.727606 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.727680 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.728665 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b87c6fc-6491-419a-96bb-9542edb1e8aa" (UID: "6b87c6fc-6491-419a-96bb-9542edb1e8aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.736199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm" (OuterVolumeSpecName: "kube-api-access-bnzdm") pod "6b87c6fc-6491-419a-96bb-9542edb1e8aa" (UID: "6b87c6fc-6491-419a-96bb-9542edb1e8aa"). InnerVolumeSpecName "kube-api-access-bnzdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.830569 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.830603 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.107006 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerID="de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.107082 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerDied","Data":"de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.109279 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerDied","Data":"6a2b1b56efb4ec4f5e3111fa2055368ffd6b8141c1330555ede25dc0591fb33e"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.109304 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2b1b56efb4ec4f5e3111fa2055368ffd6b8141c1330555ede25dc0591fb33e" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.109344 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.111877 4758 generic.go:334] "Generic (PLEG): container finished" podID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerID="edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.111925 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerDied","Data":"edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.117563 4758 generic.go:334] "Generic (PLEG): container finished" podID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerID="b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.117642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerDied","Data":"b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.119876 4758 generic.go:334] "Generic (PLEG): container finished" podID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerID="0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.119940 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerDied","Data":"0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.564263 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.646291 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.646363 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.647551 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" (UID: "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.658663 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq" (OuterVolumeSpecName: "kube-api-access-g58pq") pod "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" (UID: "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30"). InnerVolumeSpecName "kube-api-access-g58pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.749001 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.749062 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:05 crc kubenswrapper[4758]: I0130 08:49:05.140363 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:49:05 crc kubenswrapper[4758]: I0130 08:49:05.145701 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerDied","Data":"2543f597f41a34e8ce4ecb182dddc5c7e9b80aa1fe865ceffab33db59555554a"} Jan 30 08:49:05 crc kubenswrapper[4758]: I0130 08:49:05.145740 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2543f597f41a34e8ce4ecb182dddc5c7e9b80aa1fe865ceffab33db59555554a" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.548548 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.570388 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.576107 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.581615 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646256 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646374 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"58afb54a-7b57-42bc-af7c-13db0bfd1580\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646416 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"58afb54a-7b57-42bc-af7c-13db0bfd1580\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646602 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") 
pod \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646652 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"c85c22cc-99a7-4939-904c-bffa8f2d5457\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646723 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646741 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"c85c22cc-99a7-4939-904c-bffa8f2d5457\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647143 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58afb54a-7b57-42bc-af7c-13db0bfd1580" (UID: "58afb54a-7b57-42bc-af7c-13db0bfd1580"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647281 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c85c22cc-99a7-4939-904c-bffa8f2d5457" (UID: "c85c22cc-99a7-4939-904c-bffa8f2d5457"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647153 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a01921f-9f43-471a-bbd3-0a7e9bab364e" (UID: "8a01921f-9f43-471a-bbd3-0a7e9bab364e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647427 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4bedf0c-06c1-4eaa-b731-3b8c2438456d" (UID: "f4bedf0c-06c1-4eaa-b731-3b8c2438456d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647620 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647642 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647653 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647663 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.653264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl" (OuterVolumeSpecName: "kube-api-access-zz2tl") pod "8a01921f-9f43-471a-bbd3-0a7e9bab364e" (UID: "8a01921f-9f43-471a-bbd3-0a7e9bab364e"). InnerVolumeSpecName "kube-api-access-zz2tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.653760 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72" (OuterVolumeSpecName: "kube-api-access-kqh72") pod "58afb54a-7b57-42bc-af7c-13db0bfd1580" (UID: "58afb54a-7b57-42bc-af7c-13db0bfd1580"). InnerVolumeSpecName "kube-api-access-kqh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.653879 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr" (OuterVolumeSpecName: "kube-api-access-2lzfr") pod "f4bedf0c-06c1-4eaa-b731-3b8c2438456d" (UID: "f4bedf0c-06c1-4eaa-b731-3b8c2438456d"). InnerVolumeSpecName "kube-api-access-2lzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.654221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh" (OuterVolumeSpecName: "kube-api-access-2pdmh") pod "c85c22cc-99a7-4939-904c-bffa8f2d5457" (UID: "c85c22cc-99a7-4939-904c-bffa8f2d5457"). InnerVolumeSpecName "kube-api-access-2pdmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749009 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749064 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749079 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749091 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.178236 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerDied","Data":"98d64788b2e4146b6e0c5cd1d465c9c9a9a919896eb41c7fa3f82de022f57544"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.178266 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.178699 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d64788b2e4146b6e0c5cd1d465c9c9a9a919896eb41c7fa3f82de022f57544" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.179556 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerDied","Data":"ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.179583 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.179588 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.181135 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.181135 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerDied","Data":"87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.181349 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.182617 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerDied","Data":"75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.182646 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.182707 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.189897 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerStarted","Data":"1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.212337 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6cqnq" podStartSLOduration=2.937211729 podStartE2EDuration="10.212314897s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="2026-01-30 08:49:01.098463044 +0000 UTC m=+1146.070774595" lastFinishedPulling="2026-01-30 08:49:08.373566212 +0000 UTC m=+1153.345877763" observedRunningTime="2026-01-30 08:49:09.211831892 +0000 UTC m=+1154.184143443" watchObservedRunningTime="2026-01-30 08:49:09.212314897 +0000 UTC m=+1154.184626448" Jan 30 08:49:10 crc kubenswrapper[4758]: I0130 08:49:10.199358 4758 generic.go:334] "Generic (PLEG): container finished" podID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerID="2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172" exitCode=0 Jan 30 08:49:10 crc kubenswrapper[4758]: I0130 08:49:10.199381 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerDied","Data":"2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172"} Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.579009 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595223 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595453 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595574 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.611975 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.621387 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4" (OuterVolumeSpecName: "kube-api-access-g7nc4") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "kube-api-access-g7nc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.639297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.648877 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data" (OuterVolumeSpecName: "config-data") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697372 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697608 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697690 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697748 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.218506 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerDied","Data":"32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949"} Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.219081 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.218715 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.648489 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649105 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649116 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649136 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649142 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649156 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649162 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649169 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649176 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649186 4758 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649192 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649199 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerName="glance-db-sync" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649206 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerName="glance-db-sync" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649216 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649222 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649358 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649366 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649378 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649383 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerName="mariadb-account-create-update" Jan 
30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649397 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649405 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerName="glance-db-sync" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649415 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.650150 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.674590 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.714808 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.714862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.714947 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.715103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.715247 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816852 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816985 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.817021 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: 
\"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818575 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.852491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.969450 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:13 crc kubenswrapper[4758]: I0130 08:49:13.296282 4758 generic.go:334] "Generic (PLEG): container finished" podID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerID="1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613" exitCode=0 Jan 30 08:49:13 crc kubenswrapper[4758]: I0130 08:49:13.297601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerDied","Data":"1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613"} Jan 30 08:49:13 crc kubenswrapper[4758]: I0130 08:49:13.544026 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.305813 4758 generic.go:334] "Generic (PLEG): container finished" podID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" 
containerID="b9fc2b9b944e1e445bbffd968c0e5c3fd8f7cf9a43829966a1d32764c98c77b9" exitCode=0 Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.305927 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerDied","Data":"b9fc2b9b944e1e445bbffd968c0e5c3fd8f7cf9a43829966a1d32764c98c77b9"} Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.307160 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerStarted","Data":"a90ebfb23c1318d1d27a1d94512248bbaa3e37a016b8932a8f0ed5677213f22c"} Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.637285 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.676380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.676534 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.676647 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " Jan 30 08:49:14 crc 
kubenswrapper[4758]: I0130 08:49:14.690233 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx" (OuterVolumeSpecName: "kube-api-access-gmnbx") pod "5665f568-1d3e-48dc-8a32-7bd9ad02a037" (UID: "5665f568-1d3e-48dc-8a32-7bd9ad02a037"). InnerVolumeSpecName "kube-api-access-gmnbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.716519 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5665f568-1d3e-48dc-8a32-7bd9ad02a037" (UID: "5665f568-1d3e-48dc-8a32-7bd9ad02a037"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.722843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data" (OuterVolumeSpecName: "config-data") pod "5665f568-1d3e-48dc-8a32-7bd9ad02a037" (UID: "5665f568-1d3e-48dc-8a32-7bd9ad02a037"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.779238 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.779279 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.779290 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.316553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerDied","Data":"d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106"} Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.317682 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.316578 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.318431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerStarted","Data":"0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215"} Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.318648 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.338345 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" podStartSLOduration=3.338323347 podStartE2EDuration="3.338323347s" podCreationTimestamp="2026-01-30 08:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:15.337936106 +0000 UTC m=+1160.310247667" watchObservedRunningTime="2026-01-30 08:49:15.338323347 +0000 UTC m=+1160.310634898" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.546719 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.585173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:15 crc kubenswrapper[4758]: E0130 08:49:15.585572 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerName="keystone-db-sync" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.585591 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerName="keystone-db-sync" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.585753 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerName="keystone-db-sync" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.586622 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.616365 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.617395 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.619961 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.620113 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.620948 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.621050 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.621373 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.631357 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.651444 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694193 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694341 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694403 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694527 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694575 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795691 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795749 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795768 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795800 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795822 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795865 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795888 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795918 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.797263 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.798824 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.799538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.800214 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: 
\"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.805100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.810879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.810990 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.811438 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.824595 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.839864 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.866805 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.911448 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.945709 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.947115 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955433 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pzl98" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955485 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955444 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955677 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.976432 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.014871 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105122 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105179 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105201 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105225 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105294 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105373 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.106589 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.113632 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.124409 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-89x58" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.128270 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.148084 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209425 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209499 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209537 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209599 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209641 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209670 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209797 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209824 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209879 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.211786 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.214667 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.218082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"horizon-5755df7977-7khvs\" (UID: 
\"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.248003 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.248147 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.264782 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.274672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.275241 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.275471 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tdg2m" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.290720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: 
\"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311092 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311123 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311250 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 
08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311326 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311361 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311381 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311691 4758 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.319970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.330651 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.333604 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.337925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.340725 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.375119 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-sync-lqbmm"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.405152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.411433 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.412843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.412980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413146 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"placement-db-sync-lqbmm\" 
(UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413540 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413169 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.421189 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.440908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.441507 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"placement-db-sync-lqbmm\" (UID: 
\"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.457152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.459126 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.460946 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.462883 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4bspq" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.463132 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.493902 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.505915 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514330 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514400 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514460 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 
08:49:16.514540 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.524737 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.539357 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.544788 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.545419 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.565357 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.566770 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.568864 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.594059 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.597782 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.598922 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.605650 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.605866 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.605977 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t9t64" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616061 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616125 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616156 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616265 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616302 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.617645 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.618282 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.618803 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.619963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.630250 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.631938 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.643907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.651305 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.710999 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.724583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.725901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.727453 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.727498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.741761 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.741882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.741909 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"neutron-db-sync-z5hrj\" (UID: 
\"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742026 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742193 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742295 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742324 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " 
pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742487 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742517 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.760672 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.799765 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.801743 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845520 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845577 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845670 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845714 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845735 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845756 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845845 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"ceilometer-0\" (UID: 
\"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845873 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845943 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845965 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.846004 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.847213 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.847576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.851627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.854554 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.857109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.861612 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.861920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.907205 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.907707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.910465 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.918763 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.926929 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.922572 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.923410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.923641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.930871 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.931331 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.921765 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " 
pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.952641 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.953394 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.953705 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpw9v" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.953784 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.964915 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.014860 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.045440 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.046946 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.050782 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.051105 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.064827 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066829 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066934 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066974 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067064 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067112 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067175 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.112726 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.185959 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186180 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186655 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186736 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186795 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 
crc kubenswrapper[4758]: I0130 08:49:17.186835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187050 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " 
pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187082 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187149 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187210 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187260 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187363 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.194974 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.208310 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.210564 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.220204 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.268188 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.270161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.271630 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333210 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333454 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc 
kubenswrapper[4758]: I0130 08:49:17.333480 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333558 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333672 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.340775 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.341409 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.342726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.343007 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.349118 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.361168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.364254 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.368364 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.369014 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.371609 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.378910 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdsnc\" (UniqueName: 
\"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.437311 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns" containerID="cri-o://0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215" gracePeriod=10 Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.490767 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.499222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.509000 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.526017 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.549092 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.746967 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:17 crc kubenswrapper[4758]: W0130 08:49:17.830555 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c2a333_014d_4c26_b459_fd88537d21ad.slice/crio-5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8 WatchSource:0}: Error finding container 5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8: Status 404 returned error can't find the container with id 5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8 Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.984977 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:49:17 crc kubenswrapper[4758]: E0130 08:49:17.985320 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:49:17 crc kubenswrapper[4758]: E0130 08:49:17.985337 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:49:17 crc kubenswrapper[4758]: E0130 08:49:17.985381 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:50:21.985364739 +0000 UTC m=+1226.957676291 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.064778 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.198637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.400741 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.435468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerStarted","Data":"846fcd3c2e04f783d1345c62f282c95a03f7f69db8ece0d61ccbe690d3fc4153"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.449065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerStarted","Data":"bfb23d59a6ccdaa35ef6bbd15a521f36a60dd5c317e87318f0ba3a0e70190244"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.450105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" event={"ID":"70857a89-e946-4f1d-b19b-fbbd9445de0f","Type":"ContainerStarted","Data":"6e06838e5e1b2a6a9c24419729a2a4691e8bf4541d5b3b48a3967469fe7008e7"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.465825 4758 generic.go:334] "Generic (PLEG): container finished" podID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerID="0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215" exitCode=0 Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.465899 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerDied","Data":"0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.467843 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerStarted","Data":"21f1c5561b1c8dc8979b30b5fd4e9ebe5b905ba1454066061809e405ff87c8bc"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.469484 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5755df7977-7khvs" event={"ID":"57c2a333-014d-4c26-b459-fd88537d21ad","Type":"ContainerStarted","Data":"5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.616807 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.711287 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.828955 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:49:19 crc kubenswrapper[4758]: I0130 08:49:19.010271 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:19 crc kubenswrapper[4758]: I0130 08:49:19.096563 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:19 crc kubenswrapper[4758]: I0130 08:49:19.995432 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.196246 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:20 
crc kubenswrapper[4758]: I0130 08:49:20.297499 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.315796 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.320431 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357778 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357877 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: 
\"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357976 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.376170 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459138 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459253 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.460357 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.461646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.461898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.475798 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.490430 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:20 crc 
kubenswrapper[4758]: I0130 08:49:20.490735 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.501925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.646006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:21 crc kubenswrapper[4758]: W0130 08:49:21.241527 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12ff21aa_edae_4f56_a2ea_be0deb2d84d7.slice/crio-56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2 WatchSource:0}: Error finding container 56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2: Status 404 returned error can't find the container with id 56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2 Jan 30 08:49:21 crc kubenswrapper[4758]: W0130 08:49:21.245518 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a53958b_ee42_4dea_af5a_086e825f672e.slice/crio-8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9 WatchSource:0}: Error finding container 8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9: Status 404 returned error can't find the container with id 
8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9 Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.407329 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489410 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489623 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489740 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489829 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: 
\"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.534776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerStarted","Data":"f883e7737939e0c54e462c2bd84b323261d5728c320e964b9fa1343f84ca779a"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.541765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7" (OuterVolumeSpecName: "kube-api-access-stwf7") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "kube-api-access-stwf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.547128 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57679b99fc-55gj9" event={"ID":"6a53958b-ee42-4dea-af5a-086e825f672e","Type":"ContainerStarted","Data":"8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.552935 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerStarted","Data":"3de4a4bfe0a1b2d053131f232fdec07f1fa0eaa08093b340deb9c2fbfcbc2d4c"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.559712 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerStarted","Data":"e0a1dba81e1c51abf7fa44611674e5a569a817032b5b33cdc56d53bd0efa011b"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.565792 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerStarted","Data":"3f904bbcb38b17c79b221e571054b48c5a966635c0e2f3068714106112c9f841"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.567219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"c0607f79147011ac217f4da35286aea41ac3abd388dd413c6012b3dbc3df6b9a"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.593709 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config" (OuterVolumeSpecName: "config") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.593940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " Jan 30 08:49:21 crc kubenswrapper[4758]: W0130 08:49:21.594235 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d114ad35-e2cf-4ff7-8dfc-d747a159de5d/volumes/kubernetes.io~configmap/config Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594249 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config" (OuterVolumeSpecName: "config") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594389 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594551 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594575 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerDied","Data":"a90ebfb23c1318d1d27a1d94512248bbaa3e37a016b8932a8f0ed5677213f22c"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594648 4758 scope.go:117] "RemoveContainer" containerID="0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.596986 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.598399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerStarted","Data":"56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2"} Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.611056 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.641183 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.698689 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.698715 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.698723 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.725622 4758 scope.go:117] "RemoveContainer" containerID="b9fc2b9b944e1e445bbffd968c0e5c3fd8f7cf9a43829966a1d32764c98c77b9" Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.934936 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.947897 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.124704 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:22 crc kubenswrapper[4758]: E0130 08:49:22.130503 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70857a89_e946_4f1d_b19b_fbbd9445de0f.slice/crio-a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f95b5bb_621a_4c74_bc39_a5aea5ef4a06.slice/crio-b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:49:22 crc kubenswrapper[4758]: W0130 08:49:22.173435 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1a59ac3_5eae_4e76_a7b0_5e3d4395515c.slice/crio-4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa WatchSource:0}: Error finding container 4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa: Status 404 returned error can't find the container with id 4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.635168 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerStarted","Data":"a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb"} Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.639773 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b6b9c4f-tqm7h" event={"ID":"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c","Type":"ContainerStarted","Data":"4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa"} Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.643395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerStarted","Data":"2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616"} Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.656335 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-z5hrj" podStartSLOduration=6.65631464 podStartE2EDuration="6.65631464s" podCreationTimestamp="2026-01-30 08:49:16 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:22.655568806 +0000 UTC m=+1167.627880357" watchObservedRunningTime="2026-01-30 08:49:22.65631464 +0000 UTC m=+1167.628626191" Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.662142 4758 generic.go:334] "Generic (PLEG): container finished" podID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerID="a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393" exitCode=0 Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.662356 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" event={"ID":"70857a89-e946-4f1d-b19b-fbbd9445de0f","Type":"ContainerDied","Data":"a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393"} Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.672675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerStarted","Data":"3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c"} Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.682675 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8tkhn" podStartSLOduration=7.682661167 podStartE2EDuration="7.682661167s" podCreationTimestamp="2026-01-30 08:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:22.682382888 +0000 UTC m=+1167.654694449" watchObservedRunningTime="2026-01-30 08:49:22.682661167 +0000 UTC m=+1167.654972718" Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.686772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerStarted","Data":"b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864"} Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.692924 4758 generic.go:334] "Generic (PLEG): container finished" podID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerID="b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519" exitCode=0 Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.693009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerDied","Data":"b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.250325 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350633 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350689 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350750 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " Jan 30 08:49:26 crc 
kubenswrapper[4758]: I0130 08:49:23.350793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350867 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.356113 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n" (OuterVolumeSpecName: "kube-api-access-j595n") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "kube-api-access-j595n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.411018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.415763 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.417361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.451199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config" (OuterVolumeSpecName: "config") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452537 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452826 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452836 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452845 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 
08:49:23.452853 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.726508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerStarted","Data":"f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.726661 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.730096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" event={"ID":"70857a89-e946-4f1d-b19b-fbbd9445de0f","Type":"ContainerDied","Data":"6e06838e5e1b2a6a9c24419729a2a4691e8bf4541d5b3b48a3967469fe7008e7"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.730133 4758 scope.go:117] "RemoveContainer" containerID="a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.730254 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.736347 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerStarted","Data":"7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.748004 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-7trbn" podStartSLOduration=7.747984983 podStartE2EDuration="7.747984983s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:23.745008538 +0000 UTC m=+1168.717320099" watchObservedRunningTime="2026-01-30 08:49:23.747984983 +0000 UTC m=+1168.720296534" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.798819 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" path="/var/lib/kubelet/pods/d114ad35-e2cf-4ff7-8dfc-d747a159de5d/volumes" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.814284 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.826166 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.760149 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerStarted","Data":"0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.760566 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log" containerID="cri-o://b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864" gracePeriod=30 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.760636 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd" containerID="cri-o://7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab" gracePeriod=30 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.785664 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.785642748 podStartE2EDuration="8.785642748s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:24.782381395 +0000 UTC m=+1169.754692956" watchObservedRunningTime="2026-01-30 08:49:24.785642748 +0000 UTC m=+1169.757954309" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.482433 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.533809 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"] Jan 30 08:49:26 crc kubenswrapper[4758]: E0130 08:49:25.538794 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.538824 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns" Jan 30 08:49:26 crc kubenswrapper[4758]: E0130 08:49:25.538860 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="init" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.538868 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="init" Jan 30 08:49:26 crc kubenswrapper[4758]: E0130 08:49:25.538889 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerName="init" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.538897 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerName="init" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.542235 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.542275 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerName="init" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.543680 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.548647 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.554586 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609349 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609406 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609482 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc 
kubenswrapper[4758]: I0130 08:49:25.609530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609558 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.647820 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.682768 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cf698bb7b-gp87v"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.684558 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.708986 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf698bb7b-gp87v"] Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.710887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.710953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.710987 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-scripts\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711019 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711054 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-config-data\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711073 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711119 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-secret-key\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711158 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-tls-certs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/97906db2-3b2d-44ec-af77-d3edf75b7f76-logs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711218 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8m5m\" (UniqueName: \"kubernetes.io/projected/97906db2-3b2d-44ec-af77-d3edf75b7f76-kube-api-access-x8m5m\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711249 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711282 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-combined-ca-bundle\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711318 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.713945 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod 
\"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.714354 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.715003 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.718729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.721560 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.730868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc 
kubenswrapper[4758]: I0130 08:49:25.752865 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813276 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" path="/var/lib/kubelet/pods/70857a89-e946-4f1d-b19b-fbbd9445de0f/volumes" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-scripts\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813503 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-config-data\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813576 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-secret-key\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-tls-certs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813647 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97906db2-3b2d-44ec-af77-d3edf75b7f76-logs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813684 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8m5m\" (UniqueName: \"kubernetes.io/projected/97906db2-3b2d-44ec-af77-d3edf75b7f76-kube-api-access-x8m5m\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813723 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-combined-ca-bundle\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.816630 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-scripts\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.816905 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97906db2-3b2d-44ec-af77-d3edf75b7f76-logs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: 
\"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.821129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-secret-key\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.827211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-config-data\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.834872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-tls-certs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.848955 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-combined-ca-bundle\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.851518 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerDied","Data":"7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.858914 4758 generic.go:334] "Generic (PLEG): 
container finished" podID="05ff1946-6081-48fc-9474-e434068abc50" containerID="7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab" exitCode=0 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.858973 4758 generic.go:334] "Generic (PLEG): container finished" podID="05ff1946-6081-48fc-9474-e434068abc50" containerID="b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864" exitCode=143 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.859201 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" containerID="cri-o://3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c" gracePeriod=30 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.859345 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerDied","Data":"b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.859421 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" containerID="cri-o://0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083" gracePeriod=30 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.874961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8m5m\" (UniqueName: \"kubernetes.io/projected/97906db2-3b2d-44ec-af77-d3edf75b7f76-kube-api-access-x8m5m\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.008677 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.042781 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.876173 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.913054 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.913011555 podStartE2EDuration="10.913011555s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:26.34080098 +0000 UTC m=+1171.313112551" watchObservedRunningTime="2026-01-30 08:49:26.913011555 +0000 UTC m=+1171.885323106" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.917772 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerID="3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c" exitCode=143 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.917903 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerDied","Data":"3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.938829 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerDied","Data":"3f904bbcb38b17c79b221e571054b48c5a966635c0e2f3068714106112c9f841"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.938906 4758 scope.go:117] "RemoveContainer" 
containerID="7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.939403 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.963726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.963790 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.963948 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964022 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964132 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 
30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964198 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964233 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964281 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.967493 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.968499 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.974625 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs" (OuterVolumeSpecName: "logs") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.994268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc" (OuterVolumeSpecName: "kube-api-access-mdsnc") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "kube-api-access-mdsnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.014566 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts" (OuterVolumeSpecName: "scripts") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.015524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102894 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102933 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102947 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102990 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.121230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data" (OuterVolumeSpecName: "config-data") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.132066 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf698bb7b-gp87v"] Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.140064 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 08:49:27 crc kubenswrapper[4758]: W0130 08:49:27.142227 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97906db2_3b2d_44ec_af77_d3edf75b7f76.slice/crio-7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b WatchSource:0}: Error finding container 7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b: Status 404 returned error can't find the container with id 7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.172448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.205253 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.205278 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.205289 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.208743 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.276370 4758 scope.go:117] "RemoveContainer" containerID="b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.291386 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.295939 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.306924 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.332721 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:27 crc kubenswrapper[4758]: E0130 08:49:27.333260 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333278 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd" Jan 30 08:49:27 crc kubenswrapper[4758]: E0130 08:49:27.333312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333320 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333504 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log" Jan 30 08:49:27 crc 
kubenswrapper[4758]: I0130 08:49:27.333513 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.334522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.338675 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.341809 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.373105 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411468 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411608 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411710 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411727 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod 
\"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.440120 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"] Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517747 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517785 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518025 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518204 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518474 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518596 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcr9\" (UniqueName: 
\"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518685 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.522387 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.525198 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.533508 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.534986 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") 
pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.546276 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.552812 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.668939 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.785456 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ff1946-6081-48fc-9474-e434068abc50" path="/var/lib/kubelet/pods/05ff1946-6081-48fc-9474-e434068abc50/volumes" Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.972693 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerID="0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083" exitCode=0 Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.972776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerDied","Data":"0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083"} Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.987377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b"} Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.991246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"a0f4a512c849aa8b080efc203f40fc92f70f7018d35ed2e40b0a8bd341bb6b8b"} Jan 30 08:49:30 crc kubenswrapper[4758]: I0130 08:49:30.010886 4758 generic.go:334] "Generic (PLEG): container finished" podID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerID="2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616" exitCode=0 Jan 30 08:49:30 crc kubenswrapper[4758]: I0130 08:49:30.011076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" 
event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerDied","Data":"2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616"} Jan 30 08:49:31 crc kubenswrapper[4758]: I0130 08:49:31.763211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:31 crc kubenswrapper[4758]: I0130 08:49:31.845334 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:49:31 crc kubenswrapper[4758]: I0130 08:49:31.845591 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" containerID="cri-o://fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201" gracePeriod=10 Jan 30 08:49:32 crc kubenswrapper[4758]: I0130 08:49:32.036874 4758 generic.go:334] "Generic (PLEG): container finished" podID="05274cdb-49de-4144-85ca-3d46e1790dab" containerID="fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201" exitCode=0 Jan 30 08:49:32 crc kubenswrapper[4758]: I0130 08:49:32.036916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerDied","Data":"fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201"} Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.127566 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.835675 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.841956 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971690 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971782 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971818 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971932 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 
08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972001 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972030 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972089 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972153 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972206 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod 
\"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972279 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972311 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.973243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs" (OuterVolumeSpecName: "logs") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.974462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.983935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.992629 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb" (OuterVolumeSpecName: "kube-api-access-gqxlb") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "kube-api-access-gqxlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.993433 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.994395 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.006604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts" (OuterVolumeSpecName: "scripts") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.006664 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7" (OuterVolumeSpecName: "kube-api-access-7bgc7") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "kube-api-access-7bgc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.011558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts" (OuterVolumeSpecName: "scripts") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.013766 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data" (OuterVolumeSpecName: "config-data") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.037248 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.047347 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.069204 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerDied","Data":"bfb23d59a6ccdaa35ef6bbd15a521f36a60dd5c317e87318f0ba3a0e70190244"} Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.069247 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb23d59a6ccdaa35ef6bbd15a521f36a60dd5c317e87318f0ba3a0e70190244" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.069313 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.075919 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076076 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076086 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076095 4758 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076105 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076114 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076125 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076135 4758 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076144 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076168 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076179 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076188 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.082995 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerDied","Data":"e0a1dba81e1c51abf7fa44611674e5a569a817032b5b33cdc56d53bd0efa011b"} Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.083304 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.087266 4758 scope.go:117] "RemoveContainer" containerID="0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.104869 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.110152 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.119372 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data" (OuterVolumeSpecName: "config-data") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.178266 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.178306 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.178316 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.426853 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.446839 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463101 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: E0130 08:49:34.463555 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerName="keystone-bootstrap" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463572 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerName="keystone-bootstrap" Jan 30 08:49:34 crc kubenswrapper[4758]: E0130 08:49:34.463586 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463591 4758 
state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" Jan 30 08:49:34 crc kubenswrapper[4758]: E0130 08:49:34.463599 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463606 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463764 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerName="keystone-bootstrap" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463783 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463796 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.464691 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.472849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.475119 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.475385 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590640 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590707 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590738 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590952 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.591002 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.591071 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.591099 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.692856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693259 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693393 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693482 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693499 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.694094 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") device mount path 
\"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.700860 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.701449 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.701564 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.703514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.717926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc 
kubenswrapper[4758]: I0130 08:49:34.721713 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.803684 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.999664 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.019871 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.054989 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.056169 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.059536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.059944 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.060106 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.060264 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.061157 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.066903 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209666 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209689 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311252 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311354 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311438 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311467 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.321566 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"keystone-bootstrap-slg8b\" (UID: 
\"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.321888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.322333 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.322748 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.322982 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.330191 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.390648 4758 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.790603 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" path="/var/lib/kubelet/pods/85d0c75c-159b-45a5-9d5d-a9030f2d06a6/volumes" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.792493 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" path="/var/lib/kubelet/pods/f98ba341-0349-4a6f-ae1d-49f5a794d9c9/volumes" Jan 30 08:49:39 crc kubenswrapper[4758]: E0130 08:49:39.037560 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 08:49:39 crc kubenswrapper[4758]: E0130 08:49:39.038225 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9bh8dhf7h77h579h4h666h5cch58bh5f8h67fh58chddh8h565h8fh644h656hf5h688hfbh667hd6h648h5bhcbh65h5d7h584h69h5cfh5d7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jltl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-596b6b9c4f-tqm7h_openstack(d1a59ac3-5eae-4e76-a7b0-5e3d4395515c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:39 crc kubenswrapper[4758]: E0130 
08:49:39.040274 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-596b6b9c4f-tqm7h" podUID="d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.014104 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.014820 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n88hdfh547h5dbh5bdh59h674h5f4hc5h8h64dh75h584h584h4h8ch55bh96hch57hcfh77hc8h697h8fh684hf5h5cdh66dh7dh6h6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fq8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5755df7977-7khvs_openstack(57c2a333-014d-4c26-b459-fd88537d21ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.016666 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5755df7977-7khvs" podUID="57c2a333-014d-4c26-b459-fd88537d21ad" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.064183 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.064384 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f7h96h68fh559h8fh5fdh66bh5cdh656h55ch99h66bh567h549h99h599h57ch585h699hc8h5c4hdfhf8h9bh644hd8h567h5c9hf4h699hc7hc5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqdnc,ReadOnly:true,MountP
ath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57679b99fc-55gj9_openstack(6a53958b-ee42-4dea-af5a-086e825f672e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.066646 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-57679b99fc-55gj9" podUID="6a53958b-ee42-4dea-af5a-086e825f672e" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.083356 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159082 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159301 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159411 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159522 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerDied","Data":"f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded"} Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159593 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159691 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159774 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.195662 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t" (OuterVolumeSpecName: "kube-api-access-xl29t") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "kube-api-access-xl29t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.253564 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.272414 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.272450 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.272845 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.274337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.288497 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config" (OuterVolumeSpecName: "config") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.374696 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.374735 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.374749 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.546259 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.568741 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.782677 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" path="/var/lib/kubelet/pods/05274cdb-49de-4144-85ca-3d46e1790dab/volumes" Jan 30 08:49:43 crc kubenswrapper[4758]: I0130 08:49:43.127109 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Jan 30 08:49:45 crc kubenswrapper[4758]: I0130 08:49:45.194447 4758 generic.go:334] "Generic (PLEG): container finished" podID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerID="a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb" exitCode=0 Jan 30 08:49:45 crc kubenswrapper[4758]: I0130 08:49:45.194526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerDied","Data":"a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb"} Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.235536 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.248759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b6b9c4f-tqm7h" event={"ID":"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c","Type":"ContainerDied","Data":"4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa"} Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.248824 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282307 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282435 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282464 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282509 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282632 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282995 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs" (OuterVolumeSpecName: "logs") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282866 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts" (OuterVolumeSpecName: "scripts") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283094 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data" (OuterVolumeSpecName: "config-data") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283458 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283480 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283493 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.296454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.296706 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6" (OuterVolumeSpecName: "kube-api-access-jltl6") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "kube-api-access-jltl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.385167 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.385200 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.621492 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.631694 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:50 crc kubenswrapper[4758]: E0130 08:49:50.817585 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 08:49:50 crc kubenswrapper[4758]: E0130 08:49:50.817776 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-c24lg_openstack(12ff21aa-edae-4f56-a2ea-be0deb2d84d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:50 crc kubenswrapper[4758]: E0130 08:49:50.819809 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-c24lg" 
podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.881383 4758 scope.go:117] "RemoveContainer" containerID="3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.921575 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.929606 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.947703 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001407 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001488 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001536 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001684 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001809 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001845 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001886 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 
08:49:51.001914 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001990 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.002011 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.002168 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts" (OuterVolumeSpecName: "scripts") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.002829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data" (OuterVolumeSpecName: "config-data") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.003307 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.003349 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.003638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs" (OuterVolumeSpecName: "logs") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004252 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m" (OuterVolumeSpecName: "kube-api-access-7fq8m") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "kube-api-access-7fq8m". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004859 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs" (OuterVolumeSpecName: "logs") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data" (OuterVolumeSpecName: "config-data") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004967 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.005475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts" (OuterVolumeSpecName: "scripts") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.006219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft" (OuterVolumeSpecName: "kube-api-access-v2rft") pod "b166e095-ba6b-443f-8c0a-0e83bb698ccd" (UID: "b166e095-ba6b-443f-8c0a-0e83bb698ccd"). InnerVolumeSpecName "kube-api-access-v2rft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.006266 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc" (OuterVolumeSpecName: "kube-api-access-bqdnc") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "kube-api-access-bqdnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.006955 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.025387 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b166e095-ba6b-443f-8c0a-0e83bb698ccd" (UID: "b166e095-ba6b-443f-8c0a-0e83bb698ccd"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.026823 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config" (OuterVolumeSpecName: "config") pod "b166e095-ba6b-443f-8c0a-0e83bb698ccd" (UID: "b166e095-ba6b-443f-8c0a-0e83bb698ccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104544 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104582 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104592 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104602 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104610 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104619 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName:
\"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104628 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104635 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104643 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104652 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104659 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.261869 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerDied","Data":"f883e7737939e0c54e462c2bd84b323261d5728c320e964b9fa1343f84ca779a"}
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.262141 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f883e7737939e0c54e462c2bd84b323261d5728c320e964b9fa1343f84ca779a"
Jan 30 08:49:51 crc
kubenswrapper[4758]: I0130 08:49:51.262186 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj"
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.263740 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57679b99fc-55gj9" event={"ID":"6a53958b-ee42-4dea-af5a-086e825f672e","Type":"ContainerDied","Data":"8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9"}
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.263833 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9"
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.271117 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.271724 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5755df7977-7khvs" event={"ID":"57c2a333-014d-4c26-b459-fd88537d21ad","Type":"ContainerDied","Data":"5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8"}
Jan 30 08:49:51 crc kubenswrapper[4758]: E0130 08:49:51.278338 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-c24lg" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7"
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.398025 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"]
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.411854 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"]
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.430275 4758 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/horizon-5755df7977-7khvs"]
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.457804 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5755df7977-7khvs"]
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.778985 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c2a333-014d-4c26-b459-fd88537d21ad" path="/var/lib/kubelet/pods/57c2a333-014d-4c26-b459-fd88537d21ad/volumes"
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.779557 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a53958b-ee42-4dea-af5a-086e825f672e" path="/var/lib/kubelet/pods/6a53958b-ee42-4dea-af5a-086e825f672e/volumes"
Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.779989 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" path="/var/lib/kubelet/pods/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c/volumes"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.247463 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"]
Jan 30 08:49:52 crc kubenswrapper[4758]: E0130 08:49:52.248115 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="init"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248212 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="init"
Jan 30 08:49:52 crc kubenswrapper[4758]: E0130 08:49:52.248282 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerName="neutron-db-sync"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248350 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerName="neutron-db-sync"
Jan 30 08:49:52 crc kubenswrapper[4758]: E0130 08:49:52.248450 4758 cpu_manager.go:410] "RemoveStaleState:
removing container" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248522 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248765 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248837 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerName="neutron-db-sync"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.249720 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.271005 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"]
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338149 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338404 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338722 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.428795 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-645778c498-xt8kb"]
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.444639 4758 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.440595 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445309 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc
kubenswrapper[4758]: I0130 08:49:52.446610 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.441393 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.447798 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.448404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.460725 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t9t64"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.460964 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.461099 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 08:49:52 crc
kubenswrapper[4758]: I0130 08:49:52.461249 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.476768 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645778c498-xt8kb"]
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.491488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551167 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551222 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551335 4758 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551401 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.576744 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652697 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"neutron-645778c498-xt8kb\" (UID:
\"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.658761 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.658979 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.658883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]:
I0130 08:49:52.677060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.678514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.842731 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645778c498-xt8kb"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.404512 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"]
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.412289 4758 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.415551 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.418651 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.436215 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"]
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501353 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501427 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501471 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501536 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName:
\"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501729 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501763 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.602987 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName:
\"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603135 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603171 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603203 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603324 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod
\"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.611191 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.612763 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.612925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.613941 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.625264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:49:54 crc
kubenswrapper[4758]: I0130 08:49:54.629570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.633898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.743275 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.241221 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.242173 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rvklh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-x2v6d_openstack(25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.243784 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-x2v6d" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" Jan 30 08:49:57 crc kubenswrapper[4758]: I0130 08:49:57.407534 4758 scope.go:117] "RemoveContainer" containerID="fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.432265 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-x2v6d" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" Jan 30 08:49:57 crc kubenswrapper[4758]: I0130 08:49:57.644704 4758 scope.go:117] "RemoveContainer" containerID="04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3" Jan 30 08:49:57 crc kubenswrapper[4758]: I0130 08:49:57.691305 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:57 crc kubenswrapper[4758]: W0130 08:49:57.707516 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af263c7_b4ef_4cd9_bf61_2caa6ce1a43f.slice/crio-09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06 WatchSource:0}: Error finding 
container 09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06: Status 404 returned error can't find the container with id 09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06 Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.025229 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.148976 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.282709 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.524434 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.539618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerStarted","Data":"d8412f7cc1a8a2ecba1258efe870b73eea12fefd558cd99ff2489cbead4bab2c"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.544526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerStarted","Data":"b5a75a63374b225cb63a7afd682c9ae7eaaa46ad8ba68384e44dabbcfac3a2df"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.555448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerStarted","Data":"a7359b09c33ee03bcd96bf4b4d5dd4f4f882e30e5f48059a01c80017926d9869"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.563513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerStarted","Data":"09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.582674 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerStarted","Data":"23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.628976 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.635481 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"dc846a1b60c49ae0094d7935fda041f5ad64e15973405e5847dcfee6d2733586"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.644281 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lqbmm" podStartSLOduration=9.977438305 podStartE2EDuration="42.644254487s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:18.193645759 +0000 UTC m=+1163.165957300" lastFinishedPulling="2026-01-30 08:49:50.860461931 +0000 UTC m=+1195.832773482" observedRunningTime="2026-01-30 08:49:58.607341183 +0000 UTC m=+1203.579652744" watchObservedRunningTime="2026-01-30 08:49:58.644254487 +0000 UTC m=+1203.616566038" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.434609 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.672709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerStarted","Data":"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.678887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.694437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.695021 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.705395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerStarted","Data":"48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706090 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerStarted","Data":"d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706137 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" 
event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerStarted","Data":"e0c5305909e72f305e168967cbd28a761d5403564b6070027fcc64369e555331"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706708 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5cf698bb7b-gp87v" podStartSLOduration=4.677959663 podStartE2EDuration="34.70668663s" podCreationTimestamp="2026-01-30 08:49:25 +0000 UTC" firstStartedPulling="2026-01-30 08:49:27.149890752 +0000 UTC m=+1172.122202303" lastFinishedPulling="2026-01-30 08:49:57.178617719 +0000 UTC m=+1202.150929270" observedRunningTime="2026-01-30 08:49:59.706477043 +0000 UTC m=+1204.678788614" watchObservedRunningTime="2026-01-30 08:49:59.70668663 +0000 UTC m=+1204.678998181" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706993 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.720521 4758 generic.go:334] "Generic (PLEG): container finished" podID="062c9394-cb5b-4768-b71f-2965c61905b8" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035" exitCode=0 Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.720613 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerDied","Data":"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.724584 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerStarted","Data":"7e2b0cfeefa2f941937553b5eb918279c40a00098408904e6b1021d27662c44b"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.727615 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerStarted","Data":"c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.758060 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.759164 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-645778c498-xt8kb" podStartSLOduration=7.759146756 podStartE2EDuration="7.759146756s" podCreationTimestamp="2026-01-30 08:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:59.750542254 +0000 UTC m=+1204.722853825" watchObservedRunningTime="2026-01-30 08:49:59.759146756 +0000 UTC m=+1204.731458307" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.803009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerStarted","Data":"c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.846685 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76fc974bd8-4mnvj" podStartSLOduration=4.835761378 podStartE2EDuration="34.846658828s" podCreationTimestamp="2026-01-30 08:49:25 +0000 UTC" firstStartedPulling="2026-01-30 08:49:27.411581399 +0000 UTC m=+1172.383892950" lastFinishedPulling="2026-01-30 08:49:57.422478849 +0000 UTC m=+1202.394790400" observedRunningTime="2026-01-30 08:49:59.781858508 +0000 UTC m=+1204.754170079" watchObservedRunningTime="2026-01-30 08:49:59.846658828 +0000 UTC m=+1204.818970379" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 
08:49:59.859963 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-slg8b" podStartSLOduration=24.85993894 podStartE2EDuration="24.85993894s" podCreationTimestamp="2026-01-30 08:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:59.855599672 +0000 UTC m=+1204.827911223" watchObservedRunningTime="2026-01-30 08:49:59.85993894 +0000 UTC m=+1204.832250491" Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.815360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerStarted","Data":"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b"} Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.817184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.828665 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerStarted","Data":"0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b"} Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.837915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerStarted","Data":"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"} Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.849866 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" podStartSLOduration=8.849842989 podStartE2EDuration="8.849842989s" podCreationTimestamp="2026-01-30 08:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:00.839567452 +0000 UTC m=+1205.811879013" watchObservedRunningTime="2026-01-30 08:50:00.849842989 +0000 UTC m=+1205.822154540" Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.873978 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.873936744 podStartE2EDuration="33.873936744s" podCreationTimestamp="2026-01-30 08:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:00.866855489 +0000 UTC m=+1205.839167060" watchObservedRunningTime="2026-01-30 08:50:00.873936744 +0000 UTC m=+1205.846248305" Jan 30 08:50:01 crc kubenswrapper[4758]: I0130 08:50:01.886141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerStarted","Data":"133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3"} Jan 30 08:50:01 crc kubenswrapper[4758]: I0130 08:50:01.971600 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.971583527 podStartE2EDuration="27.971583527s" podCreationTimestamp="2026-01-30 08:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:01.945512409 +0000 UTC m=+1206.917823960" watchObservedRunningTime="2026-01-30 08:50:01.971583527 +0000 UTC m=+1206.943895078" Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.918628 4758 generic.go:334] "Generic (PLEG): container finished" podID="11f7c236-867a-465b-9514-de6a765b312b" containerID="23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94" exitCode=0 Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.919006 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerDied","Data":"23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94"} Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.931499 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerStarted","Data":"5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4"} Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.932421 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.951068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"} Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.978607 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bcffb56d9-w524k" podStartSLOduration=8.978587289 podStartE2EDuration="8.978587289s" podCreationTimestamp="2026-01-30 08:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:02.975933014 +0000 UTC m=+1207.948244575" watchObservedRunningTime="2026-01-30 08:50:02.978587289 +0000 UTC m=+1207.950898840" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.410598 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528028 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528597 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528622 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528668 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.529492 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs" (OuterVolumeSpecName: "logs") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.539175 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts" (OuterVolumeSpecName: "scripts") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.543182 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv" (OuterVolumeSpecName: "kube-api-access-wmcjv") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "kube-api-access-wmcjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.566088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.566493 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data" (OuterVolumeSpecName: "config-data") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631509 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631538 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631569 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631579 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631589 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.803936 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.803991 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.804007 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.804016 4758 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.864022 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.865817 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:04.998190 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:04.998016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerDied","Data":"846fcd3c2e04f783d1345c62f282c95a03f7f69db8ece0d61ccbe690d3fc4153"} Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.000678 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846fcd3c2e04f783d1345c62f282c95a03f7f69db8ece0d61ccbe690d3fc4153" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.092776 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:50:05 crc kubenswrapper[4758]: E0130 08:50:05.094697 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f7c236-867a-465b-9514-de6a765b312b" containerName="placement-db-sync" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.094739 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f7c236-867a-465b-9514-de6a765b312b" containerName="placement-db-sync" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.095017 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f7c236-867a-465b-9514-de6a765b312b" containerName="placement-db-sync" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.096591 4758 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102171 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102319 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102492 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102834 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tdg2m" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.103018 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.139485 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252731 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252838 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252861 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.253207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.253328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.253547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355343 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355484 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355657 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.360111 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.360465 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.361247 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.362116 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"placement-6bdfdc4b-wwqnt\" (UID: 
\"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.364244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.366648 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.381738 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.430088 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.009402 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.009464 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.043140 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.043223 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.022716 4758 generic.go:334] "Generic (PLEG): container finished" podID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerID="c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04" exitCode=0 Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.022817 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerDied","Data":"c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04"} Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.578554 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.666839 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.667094 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-7trbn" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" 
containerID="cri-o://f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be" gracePeriod=10 Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.669930 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.669978 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.757323 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.766283 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.043378 4758 generic.go:334] "Generic (PLEG): container finished" podID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerID="f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be" exitCode=0 Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.043632 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerDied","Data":"f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be"} Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.044840 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.044863 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:10 crc kubenswrapper[4758]: I0130 08:50:10.056087 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:10 crc kubenswrapper[4758]: I0130 08:50:10.056646 4758 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:10 crc kubenswrapper[4758]: I0130 08:50:10.959743 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.123308 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.123438 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.123587 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.131992 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.132591 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " 
Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.132793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.153010 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd" (OuterVolumeSpecName: "kube-api-access-ln7jd") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "kube-api-access-ln7jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.153500 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.162730 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts" (OuterVolumeSpecName: "scripts") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.180018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.182520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerDied","Data":"d8412f7cc1a8a2ecba1258efe870b73eea12fefd558cd99ff2489cbead4bab2c"} Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.182600 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8412f7cc1a8a2ecba1258efe870b73eea12fefd558cd99ff2489cbead4bab2c" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.182663 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.218156 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data" (OuterVolumeSpecName: "config-data") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237289 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237356 4758 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237368 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237376 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237386 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.246558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.339797 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.394028 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.552539 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553115 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553195 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553262 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553292 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.576768 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj" (OuterVolumeSpecName: "kube-api-access-9lrbj") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "kube-api-access-9lrbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.599266 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:50:11 crc kubenswrapper[4758]: W0130 08:50:11.653420 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82fd1f36_9f4f_441f_959d_e2eddc79c99b.slice/crio-d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287 WatchSource:0}: Error finding container d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287: Status 404 returned error can't find the container with id d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287 Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.655071 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.708401 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.708553 4758 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.710439 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.747598 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.747718 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.817849 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config" (OuterVolumeSpecName: "config") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.856618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.862440 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.863666 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.863792 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.863803 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.907295 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.972072 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.210221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.224728 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerStarted","Data":"ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.250244 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerDied","Data":"21f1c5561b1c8dc8979b30b5fd4e9ebe5b905ba1454066061809e405ff87c8bc"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.250298 4758 scope.go:117] "RemoveContainer" containerID="f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.250425 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.252475 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-546cd7df57-wnwgz"] Jan 30 08:50:12 crc kubenswrapper[4758]: E0130 08:50:12.253307 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="init" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253325 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="init" Jan 30 08:50:12 crc kubenswrapper[4758]: E0130 08:50:12.253339 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerName="keystone-bootstrap" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253347 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerName="keystone-bootstrap" Jan 30 08:50:12 crc kubenswrapper[4758]: E0130 08:50:12.253364 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253393 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253587 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerName="keystone-bootstrap" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253614 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.264148 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.282368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerStarted","Data":"d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.289491 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.290230 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.290427 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.290561 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.295280 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.296678 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.297923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-fernet-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298028 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkmf\" (UniqueName: 
\"kubernetes.io/projected/283998cc-90b9-49fb-91f5-7cfd514603d0-kube-api-access-pxkmf\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298130 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-internal-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298155 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-config-data\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298252 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-scripts\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-credential-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-public-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298349 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-combined-ca-bundle\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.366312 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-c24lg" podStartSLOduration=6.646770495 podStartE2EDuration="56.366290024s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:21.251204505 +0000 UTC m=+1166.223516056" lastFinishedPulling="2026-01-30 08:50:10.970724034 +0000 UTC m=+1215.943035585" observedRunningTime="2026-01-30 08:50:12.263761836 +0000 UTC m=+1217.236073407" watchObservedRunningTime="2026-01-30 08:50:12.366290024 +0000 UTC m=+1217.338601575" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkmf\" (UniqueName: \"kubernetes.io/projected/283998cc-90b9-49fb-91f5-7cfd514603d0-kube-api-access-pxkmf\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-internal-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " 
pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-config-data\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-scripts\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404664 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-credential-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-public-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404723 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-combined-ca-bundle\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404776 
4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-fernet-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.411482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-scripts\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.414488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-internal-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.416279 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-public-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.450641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-credential-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.456779 4758 scope.go:117] "RemoveContainer" containerID="b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519" Jan 30 08:50:12 crc 
kubenswrapper[4758]: I0130 08:50:12.467026 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.468758 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkmf\" (UniqueName: \"kubernetes.io/projected/283998cc-90b9-49fb-91f5-7cfd514603d0-kube-api-access-pxkmf\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.490587 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-config-data\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.491960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-fernet-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.539318 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-546cd7df57-wnwgz"] Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.550178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-combined-ca-bundle\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.585744 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 
08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.605355 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.627825 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.311657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerStarted","Data":"c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9"} Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.326490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-546cd7df57-wnwgz"] Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.344224 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerStarted","Data":"21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7"} Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.344262 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerStarted","Data":"ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19"} Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.345280 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.345320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.347153 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x2v6d" podStartSLOduration=7.275150216 
podStartE2EDuration="57.347136335s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:21.269911401 +0000 UTC m=+1166.242222952" lastFinishedPulling="2026-01-30 08:50:11.34189752 +0000 UTC m=+1216.314209071" observedRunningTime="2026-01-30 08:50:13.336807637 +0000 UTC m=+1218.309119198" watchObservedRunningTime="2026-01-30 08:50:13.347136335 +0000 UTC m=+1218.319447886" Jan 30 08:50:13 crc kubenswrapper[4758]: W0130 08:50:13.367081 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod283998cc_90b9_49fb_91f5_7cfd514603d0.slice/crio-f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab WatchSource:0}: Error finding container f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab: Status 404 returned error can't find the container with id f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.377929 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bdfdc4b-wwqnt" podStartSLOduration=8.377909743 podStartE2EDuration="8.377909743s" podCreationTimestamp="2026-01-30 08:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:13.376634852 +0000 UTC m=+1218.348946413" watchObservedRunningTime="2026-01-30 08:50:13.377909743 +0000 UTC m=+1218.350221294" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.781997 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" path="/var/lib/kubelet/pods/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06/volumes" Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.363434 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-546cd7df57-wnwgz" 
event={"ID":"283998cc-90b9-49fb-91f5-7cfd514603d0","Type":"ContainerStarted","Data":"70e951145b350707e00f5be9c396d9f13b7c6cd4b7a47ae4d6d6493a86522538"} Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.363471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-546cd7df57-wnwgz" event={"ID":"283998cc-90b9-49fb-91f5-7cfd514603d0","Type":"ContainerStarted","Data":"f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab"} Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.363496 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.403444 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-546cd7df57-wnwgz" podStartSLOduration=2.403417434 podStartE2EDuration="2.403417434s" podCreationTimestamp="2026-01-30 08:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:14.398737464 +0000 UTC m=+1219.371049015" watchObservedRunningTime="2026-01-30 08:50:14.403417434 +0000 UTC m=+1219.375728985" Jan 30 08:50:16 crc kubenswrapper[4758]: I0130 08:50:16.011830 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:50:16 crc kubenswrapper[4758]: I0130 08:50:16.045796 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 
08:50:17 crc kubenswrapper[4758]: E0130 08:50:17.027129 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:50:17 crc kubenswrapper[4758]: I0130 08:50:17.403558 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:50:19 crc kubenswrapper[4758]: I0130 08:50:19.433834 4758 generic.go:334] "Generic (PLEG): container finished" podID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerID="ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3" exitCode=0 Jan 30 08:50:19 crc kubenswrapper[4758]: I0130 08:50:19.434412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerDied","Data":"ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3"} Jan 30 08:50:22 crc kubenswrapper[4758]: I0130 08:50:22.051507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:50:22 crc kubenswrapper[4758]: E0130 08:50:22.051767 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:22 crc kubenswrapper[4758]: E0130 08:50:22.052187 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:50:22 crc kubenswrapper[4758]: E0130 08:50:22.052272 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift 
podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:52:24.052242318 +0000 UTC m=+1349.024553869 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:50:22 crc kubenswrapper[4758]: I0130 08:50:22.851655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.159600 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.163782 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bcffb56d9-w524k" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api" containerID="cri-o://0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b" gracePeriod=30 Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.164189 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bcffb56d9-w524k" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" containerID="cri-o://5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4" gracePeriod=30 Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.188141 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.235225 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f9c8c6ff5-f2sb7"] Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.237765 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.267494 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f9c8c6ff5-f2sb7"] Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399644 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtpz\" (UniqueName: \"kubernetes.io/projected/0bacb926-f58c-4c06-870a-633b7a3795c5-kube-api-access-xvtpz\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399741 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-internal-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-public-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399874 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-combined-ca-bundle\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-ovndb-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-httpd-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399988 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.480859 4758 generic.go:334] "Generic (PLEG): container finished" podID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerID="c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9" exitCode=0 Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.480936 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerDied","Data":"c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9"} Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-internal-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc 
kubenswrapper[4758]: I0130 08:50:23.503619 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-public-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503689 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-combined-ca-bundle\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503718 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-ovndb-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.509139 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-httpd-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.509298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.509555 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xvtpz\" (UniqueName: \"kubernetes.io/projected/0bacb926-f58c-4c06-870a-633b7a3795c5-kube-api-access-xvtpz\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.519445 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.520771 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-combined-ca-bundle\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.520921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-ovndb-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.523957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-internal-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.525708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-httpd-config\") pod 
\"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.526811 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-public-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.534637 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtpz\" (UniqueName: \"kubernetes.io/projected/0bacb926-f58c-4c06-870a-633b7a3795c5-kube-api-access-xvtpz\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.564652 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.891503 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.027456 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.027980 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.028204 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.033095 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "12ff21aa-edae-4f56-a2ea-be0deb2d84d7" (UID: "12ff21aa-edae-4f56-a2ea-be0deb2d84d7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.050571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf" (OuterVolumeSpecName: "kube-api-access-v55sf") pod "12ff21aa-edae-4f56-a2ea-be0deb2d84d7" (UID: "12ff21aa-edae-4f56-a2ea-be0deb2d84d7"). 
InnerVolumeSpecName "kube-api-access-v55sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.058472 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12ff21aa-edae-4f56-a2ea-be0deb2d84d7" (UID: "12ff21aa-edae-4f56-a2ea-be0deb2d84d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.132659 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.132704 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.132713 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.497782 4758 generic.go:334] "Generic (PLEG): container finished" podID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerID="5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4" exitCode=0 Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.497979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerDied","Data":"5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4"} Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 
08:50:24.500284 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.500964 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerDied","Data":"56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2"} Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.501000 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.745255 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6bcffb56d9-w524k" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.122907 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152111 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152204 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152243 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152271 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152447 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvklh\" 
(UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.154199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.169073 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh" (OuterVolumeSpecName: "kube-api-access-rvklh") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "kube-api-access-rvklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.179993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts" (OuterVolumeSpecName: "scripts") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.192666 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.240963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254611 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254644 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254665 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254676 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254686 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.341361 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-worker-9c5c6c655-zfgmj"] Jan 30 08:50:25 crc kubenswrapper[4758]: E0130 08:50:25.341798 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerName="cinder-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.341811 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerName="cinder-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: E0130 08:50:25.341830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerName="barbican-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.341836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerName="barbican-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.342014 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerName="barbican-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.342049 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerName="cinder-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.344695 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.366840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.366920 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db351db3-71c9-4b03-98b9-68da68f45f14-logs\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.366959 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-combined-ca-bundle\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.367081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data-custom\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.367114 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rtsq\" (UniqueName: \"kubernetes.io/projected/db351db3-71c9-4b03-98b9-68da68f45f14-kube-api-access-9rtsq\") 
pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.383254 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.384306 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4bspq" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.384418 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.390592 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data" (OuterVolumeSpecName: "config-data") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.407639 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9c5c6c655-zfgmj"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.457022 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5b64f54b54-68xdf"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481748 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b64f54b54-68xdf"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data-custom\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481971 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtsq\" (UniqueName: \"kubernetes.io/projected/db351db3-71c9-4b03-98b9-68da68f45f14-kube-api-access-9rtsq\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482093 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/db351db3-71c9-4b03-98b9-68da68f45f14-logs\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-combined-ca-bundle\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482386 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481891 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.489589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db351db3-71c9-4b03-98b9-68da68f45f14-logs\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.500535 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.507977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data-custom\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " 
pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.531504 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.531632 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-combined-ca-bundle\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.537858 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtsq\" (UniqueName: \"kubernetes.io/projected/db351db3-71c9-4b03-98b9-68da68f45f14-kube-api-access-9rtsq\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.537927 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.541418 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.555352 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.566063 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerDied","Data":"3de4a4bfe0a1b2d053131f232fdec07f1fa0eaa08093b340deb9c2fbfcbc2d4c"} Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.566103 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de4a4bfe0a1b2d053131f232fdec07f1fa0eaa08093b340deb9c2fbfcbc2d4c" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.566162 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583781 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-logs\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583801 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583855 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdwj\" (UniqueName: \"kubernetes.io/projected/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-kube-api-access-rrdwj\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583872 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-combined-ca-bundle\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583945 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.584000 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data-custom\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.584016 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc 
kubenswrapper[4758]: I0130 08:50:25.688358 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-logs\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688420 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-combined-ca-bundle\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688437 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrdwj\" (UniqueName: \"kubernetes.io/projected/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-kube-api-access-rrdwj\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688476 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" 
Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688513 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688587 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data-custom\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688605 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.694818 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.698803 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc 
kubenswrapper[4758]: I0130 08:50:25.699410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.700199 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-logs\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.713727 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-combined-ca-bundle\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.717215 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data-custom\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.717927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 
08:50:25.718302 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.718422 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrdwj\" (UniqueName: \"kubernetes.io/projected/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-kube-api-access-rrdwj\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.737338 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.740024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.851976 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.940688 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.966192 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.976922 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.977704 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.981231 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.034148 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.048613 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.104965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105418 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105442 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105471 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105486 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234481 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234981 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.235115 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.235924 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.242285 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod 
\"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.252674 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.264832 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.271614 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.355830 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.429363 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.507115 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f9c8c6ff5-f2sb7"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.535537 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.551539 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.622886 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.652003 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.652223 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.652858 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod 
\"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.653001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.653183 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.684507 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.689204 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"} Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701439 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent" containerID="cri-o://714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701696 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701748 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd" containerID="cri-o://5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701790 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core" containerID="cri-o://c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701825 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent" containerID="cri-o://35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747250 4758 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-89x58" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747565 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747602 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747580 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756219 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756384 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65px\" (UniqueName: 
\"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756469 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.757528 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.760853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.761380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.763216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod 
\"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.859704 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860116 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860166 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860242 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860261 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.863255 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.885389 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.913750 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9c5c6c655-zfgmj"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.925389 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.207736863 podStartE2EDuration="1m10.925361464s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:21.280670352 +0000 UTC m=+1166.252981903" lastFinishedPulling="2026-01-30 08:50:24.998294953 +0000 UTC m=+1229.970606504" observedRunningTime="2026-01-30 08:50:26.901109013 +0000 UTC m=+1231.873420564" watchObservedRunningTime="2026-01-30 08:50:26.925361464 +0000 UTC m=+1231.897673015" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966099 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966205 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") 
" pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.967005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.979868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.985304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.004749 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.011547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.017782 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.044303 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.095988 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.097589 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.120998 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.121340 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.136210 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.178775 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 
08:50:27.179270 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179296 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179357 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179388 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrlg\" (UniqueName: 
\"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281600 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281702 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281731 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: 
I0130 08:50:27.281751 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.282220 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.291921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.314245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.315791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.315791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " 
pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.320022 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.333979 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.432764 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.497705 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.711952 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b64f54b54-68xdf"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.844503 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" event={"ID":"ed572628-0a60-4b3b-a441-d352abdd1973","Type":"ContainerStarted","Data":"07d78efd8cc8419582756034bb64dab8efc70ca16dace7b8b82c11dfca81ce07"} Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.844988 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f9c8c6ff5-f2sb7" event={"ID":"0bacb926-f58c-4c06-870a-633b7a3795c5","Type":"ContainerStarted","Data":"e8886208e64939d802a6a132e831c1b307211d0b8aa24f58bb02fa9b339da58e"} Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.852538 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-78f5565ffd-7fzt7"]
Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.873380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9c5c6c655-zfgmj" event={"ID":"db351db3-71c9-4b03-98b9-68da68f45f14","Type":"ContainerStarted","Data":"4468ed31af8ae5e154cc77f22fd571e961b8cb309cc56d0ad78a50dfe4f053e3"}
Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.196802 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" exitCode=2
Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.197166 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"}
Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.231129 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"]
Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.500852 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.807529 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.222249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerStarted","Data":"90b168ab74f0566ec17e5d43d386b2fcb8eeb79a45036e7170359c844ac4189c"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.230442 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerStarted","Data":"34dd11d55e470d260388ede08e80598aeb8947a7588a438590094ab2a374b774"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.237448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerStarted","Data":"6d3908fc7a3657c2e0650ef09a082b68d8f4744d5e8a7add60e40d22f8b4f455"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.278503 4758 generic.go:334] "Generic (PLEG): container finished" podID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerID="0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b" exitCode=0
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.278639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerDied","Data":"0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319367 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" exitCode=0
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319408 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" exitCode=0
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319476 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319514 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.335913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f9c8c6ff5-f2sb7" event={"ID":"0bacb926-f58c-4c06-870a-633b7a3795c5","Type":"ContainerStarted","Data":"94895509e9247171353ccb7b6c99b7f3ecb3c377484b617d31e0f4c5150bc05d"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.375709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerStarted","Data":"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.375792 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerStarted","Data":"42842348e2efe75cc2feccd0c07f7b19138f408619623716e0521afb772faf84"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.378922 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" event={"ID":"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f","Type":"ContainerStarted","Data":"1a200f221ada47d4363852254c7d1c361330f7347b02e22a0000e9d5cd071b19"}
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.602265 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621006 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621197 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621294 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621342 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621533 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621642 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") "
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.669633 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.723623 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw" (OuterVolumeSpecName: "kube-api-access-rkxtw") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "kube-api-access-rkxtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.724519 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.724560 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.896628 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.911174 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config" (OuterVolumeSpecName: "config") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.915785 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.915868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.937362 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938774 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938807 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938819 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938831 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.990829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.040748 4758 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.395390 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerID="0fad6fa55bd6734af8e520f919028effe9343fb8ce554bc0f6cdb0c2e748ee42" exitCode=0
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.395500 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerDied","Data":"0fad6fa55bd6734af8e520f919028effe9343fb8ce554bc0f6cdb0c2e748ee42"}
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.410308 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerDied","Data":"7e2b0cfeefa2f941937553b5eb918279c40a00098408904e6b1021d27662c44b"}
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.410396 4758 scope.go:117] "RemoveContainer" containerID="5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4"
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.410652 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k"
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.431443 4758 generic.go:334] "Generic (PLEG): container finished" podID="ed572628-0a60-4b3b-a441-d352abdd1973" containerID="8d73003660a11a0b4fdfcf6f4814fd21ff002ef41709a84740c8884f6ddb8363" exitCode=0
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.431847 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" event={"ID":"ed572628-0a60-4b3b-a441-d352abdd1973","Type":"ContainerDied","Data":"8d73003660a11a0b4fdfcf6f4814fd21ff002ef41709a84740c8884f6ddb8363"}
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.482675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f9c8c6ff5-f2sb7" event={"ID":"0bacb926-f58c-4c06-870a-633b7a3795c5","Type":"ContainerStarted","Data":"a11f21054e7e9d09710ffb5c7dae2de1b641670c33364526fad392037e11b67b"}
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.483664 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f9c8c6ff5-f2sb7"
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.504641 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerStarted","Data":"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"}
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.505535 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f5565ffd-7fzt7"
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.505976 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f5565ffd-7fzt7"
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.529090 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"]
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.565974 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"]
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.568920 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f9c8c6ff5-f2sb7" podStartSLOduration=7.568905133 podStartE2EDuration="7.568905133s" podCreationTimestamp="2026-01-30 08:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:30.559742613 +0000 UTC m=+1235.532054164" watchObservedRunningTime="2026-01-30 08:50:30.568905133 +0000 UTC m=+1235.541216684"
Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.613452 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78f5565ffd-7fzt7" podStartSLOduration=5.613420528 podStartE2EDuration="5.613420528s" podCreationTimestamp="2026-01-30 08:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:30.604417672 +0000 UTC m=+1235.576729223" watchObservedRunningTime="2026-01-30 08:50:30.613420528 +0000 UTC m=+1235.585732079"
Jan 30 08:50:31 crc kubenswrapper[4758]: I0130 08:50:31.522694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerStarted","Data":"cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362"}
Jan 30 08:50:31 crc kubenswrapper[4758]: I0130 08:50:31.795201 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" path="/var/lib/kubelet/pods/bf67f1ce-88a8-4255-b067-6f6a001ec6b3/volumes"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.252964 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.348961 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") "
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349046 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") "
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349317 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") "
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349372 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") "
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349405 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") "
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.364543 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp" (OuterVolumeSpecName: "kube-api-access-dzgkp") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "kube-api-access-dzgkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.405449 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.411636 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.439732 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config" (OuterVolumeSpecName: "config") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.444407 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457606 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457659 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457676 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457692 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457708 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.552953 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" event={"ID":"ed572628-0a60-4b3b-a441-d352abdd1973","Type":"ContainerDied","Data":"07d78efd8cc8419582756034bb64dab8efc70ca16dace7b8b82c11dfca81ce07"}
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.553120 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.618948 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"]
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.637367 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"]
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.731342 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-856f46cdd-mkt57"]
Jan 30 08:50:32 crc kubenswrapper[4758]: E0130 08:50:32.737916 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" containerName="init"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.738216 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" containerName="init"
Jan 30 08:50:32 crc kubenswrapper[4758]: E0130 08:50:32.738351 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.738447 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd"
Jan 30 08:50:32 crc kubenswrapper[4758]: E0130 08:50:32.738565 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.738650 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.739067 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" containerName="init"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.739166 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.739261 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.740636 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.745131 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.745226 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.753898 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-856f46cdd-mkt57"]
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874222 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt9j\" (UniqueName: \"kubernetes.io/projected/e33c3e33-3106-483e-bdba-400a2911ff27-kube-api-access-hnt9j\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-combined-ca-bundle\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-internal-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874795 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-public-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874861 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33c3e33-3106-483e-bdba-400a2911ff27-logs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.875001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data-custom\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976525 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-combined-ca-bundle\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976663 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-internal-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976847 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-public-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976963 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33c3e33-3106-483e-bdba-400a2911ff27-logs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.977072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data-custom\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.977248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnt9j\" (UniqueName: \"kubernetes.io/projected/e33c3e33-3106-483e-bdba-400a2911ff27-kube-api-access-hnt9j\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.977736 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33c3e33-3106-483e-bdba-400a2911ff27-logs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.985146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.985627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-public-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.988700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-combined-ca-bundle\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.990304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data-custom\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.993809 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-internal-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.996575 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnt9j\" (UniqueName: \"kubernetes.io/projected/e33c3e33-3106-483e-bdba-400a2911ff27-kube-api-access-hnt9j\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.061488 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.082388 4758 scope.go:117] "RemoveContainer" containerID="0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.127333 4758 scope.go:117] "RemoveContainer" containerID="8d73003660a11a0b4fdfcf6f4814fd21ff002ef41709a84740c8884f6ddb8363"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.701953 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77c9c856fc-frscq"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.745281 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" podStartSLOduration=7.745258956 podStartE2EDuration="7.745258956s" podCreationTimestamp="2026-01-30 08:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:33.733870384 +0000 UTC m=+1238.706181945" watchObservedRunningTime="2026-01-30 08:50:33.745258956 +0000 UTC m=+1238.717570507"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.853780 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" path="/var/lib/kubelet/pods/ed572628-0a60-4b3b-a441-d352abdd1973/volumes"
Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.953719 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-856f46cdd-mkt57"]
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.747562 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerStarted","Data":"516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.757405 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" event={"ID":"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f","Type":"ContainerStarted","Data":"43ac1706b412ac604dd8819c1e99c31a2e0a290f8ab564316a56e3fdf868d108"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.757686 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" event={"ID":"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f","Type":"ContainerStarted","Data":"8a04314bf8430d5d7c2e771ad77209d299ca17a3ab7ef8eace9464a8971ebcc1"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.764682 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-856f46cdd-mkt57" event={"ID":"e33c3e33-3106-483e-bdba-400a2911ff27","Type":"ContainerStarted","Data":"2a1f9a134963a8ae7359c908fd6dd76f9dfcbb29ae3a680b1d8a49a73a2042cc"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.764738 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-856f46cdd-mkt57" event={"ID":"e33c3e33-3106-483e-bdba-400a2911ff27","Type":"ContainerStarted","Data":"fb8710a11cadf2278c15d8ea07424df193840bb6f5cd337fdbdae7feeb6e0d7d"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.784794 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerStarted","Data":"b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.785027 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" containerID="cri-o://cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362" gracePeriod=30
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.785390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.785429 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" containerID="cri-o://b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643" gracePeriod=30
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.799209 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9c5c6c655-zfgmj" event={"ID":"db351db3-71c9-4b03-98b9-68da68f45f14","Type":"ContainerStarted","Data":"c73e3e6f184a18e010d163cc713f26908d4fcd3144544fba2c833abfe75dc561"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.799263 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9c5c6c655-zfgmj" event={"ID":"db351db3-71c9-4b03-98b9-68da68f45f14","Type":"ContainerStarted","Data":"0aa7e0241f1bc372b3cfa866dfd90c64c33e15edc3083f4e5daede07f64075ef"}
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.836215 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" podStartSLOduration=4.638685501 podStartE2EDuration="9.836194395s" podCreationTimestamp="2026-01-30 08:50:25 +0000 UTC" firstStartedPulling="2026-01-30 08:50:27.9649145 +0000 UTC m=+1232.937226041" lastFinishedPulling="2026-01-30 08:50:33.162423384 +0000 UTC m=+1238.134734935" observedRunningTime="2026-01-30 08:50:34.789116379 +0000 UTC m=+1239.761427930" watchObservedRunningTime="2026-01-30 08:50:34.836194395 +0000 UTC m=+1239.808505946"
Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.877537 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.877515358 podStartE2EDuration="7.877515358s" podCreationTimestamp="2026-01-30 08:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:34.824774392 +0000 UTC m=+1239.797085943" watchObservedRunningTime="2026-01-30 08:50:34.877515358 +0000 UTC m=+1239.849826909" Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.879840 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9c5c6c655-zfgmj" podStartSLOduration=3.670361718 podStartE2EDuration="9.879825121s" podCreationTimestamp="2026-01-30 08:50:25 +0000 UTC" firstStartedPulling="2026-01-30 08:50:26.932641605 +0000 UTC m=+1231.904953156" lastFinishedPulling="2026-01-30 08:50:33.142105008 +0000 UTC m=+1238.114416559" observedRunningTime="2026-01-30 08:50:34.854502558 +0000 UTC m=+1239.826814109" watchObservedRunningTime="2026-01-30 08:50:34.879825121 +0000 UTC m=+1239.852136672" Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.810445 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerStarted","Data":"5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531"} Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.813478 4758 generic.go:334] "Generic (PLEG): container finished" podID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerID="cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362" exitCode=143 Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.813565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerDied","Data":"cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362"} Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.816221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-856f46cdd-mkt57" 
event={"ID":"e33c3e33-3106-483e-bdba-400a2911ff27","Type":"ContainerStarted","Data":"43fe8835acf263bba3d4baaefd72c6d009a74897d66eda20dcfc35cc50e978c6"} Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.817887 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.912343 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-856f46cdd-mkt57" podStartSLOduration=3.912314774 podStartE2EDuration="3.912314774s" podCreationTimestamp="2026-01-30 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:35.85902107 +0000 UTC m=+1240.831332641" watchObservedRunningTime="2026-01-30 08:50:35.912314774 +0000 UTC m=+1240.884626325" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.011808 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.011937 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.013097 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a"} pod="openstack/horizon-76fc974bd8-4mnvj" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.013146 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" containerID="cri-o://5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a" gracePeriod=30 Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.044444 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.044546 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.045857 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27"} pod="openstack/horizon-5cf698bb7b-gp87v" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.045922 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" containerID="cri-o://33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27" gracePeriod=30 Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.828435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerStarted","Data":"fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b"} Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.828503 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 
08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.862578 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.330832156 podStartE2EDuration="10.862555202s" podCreationTimestamp="2026-01-30 08:50:26 +0000 UTC" firstStartedPulling="2026-01-30 08:50:28.740403054 +0000 UTC m=+1233.712714605" lastFinishedPulling="2026-01-30 08:50:33.2721261 +0000 UTC m=+1238.244437651" observedRunningTime="2026-01-30 08:50:36.854153265 +0000 UTC m=+1241.826464836" watchObservedRunningTime="2026-01-30 08:50:36.862555202 +0000 UTC m=+1241.834866763" Jan 30 08:50:37 crc kubenswrapper[4758]: I0130 08:50:37.122715 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.012825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.206467 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.398276 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.495499 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75649bd464-bvxps"] Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.497371 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.520282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75649bd464-bvxps"] Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572461 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca89048c-91af-4732-8ef8-24da4618ccf9-logs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572516 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-public-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572565 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-combined-ca-bundle\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572601 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-internal-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-config-data\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572668 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-scripts\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572694 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2xw\" (UniqueName: \"kubernetes.io/projected/ca89048c-91af-4732-8ef8-24da4618ccf9-kube-api-access-vf2xw\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca89048c-91af-4732-8ef8-24da4618ccf9-logs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-public-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674328 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-combined-ca-bundle\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674364 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-internal-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-config-data\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674433 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-scripts\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2xw\" (UniqueName: \"kubernetes.io/projected/ca89048c-91af-4732-8ef8-24da4618ccf9-kube-api-access-vf2xw\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674852 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca89048c-91af-4732-8ef8-24da4618ccf9-logs\") pod 
\"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.693456 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-internal-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.700515 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-public-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.717618 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-config-data\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.717671 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-scripts\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.719181 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-combined-ca-bundle\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 
30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.725631 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2xw\" (UniqueName: \"kubernetes.io/projected/ca89048c-91af-4732-8ef8-24da4618ccf9-kube-api-access-vf2xw\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.827418 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.439287 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.439635 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.523474 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75649bd464-bvxps"] Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.915916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75649bd464-bvxps" event={"ID":"ca89048c-91af-4732-8ef8-24da4618ccf9","Type":"ContainerStarted","Data":"e406e4cffc95e5b37a40c8c543dac09c9bb3a88880092f7b275690ecba191d1f"} Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.916591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75649bd464-bvxps" 
event={"ID":"ca89048c-91af-4732-8ef8-24da4618ccf9","Type":"ContainerStarted","Data":"5de66c700ca69441159acd4761f559bd5b2fb7a93d262443fd005a85d102bd83"} Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.047270 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.131054 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.131369 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" containerID="cri-o://f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" gracePeriod=10 Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.839198 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.906308 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935673 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935775 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod 
\"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935804 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935953 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.985055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75649bd464-bvxps" event={"ID":"ca89048c-91af-4732-8ef8-24da4618ccf9","Type":"ContainerStarted","Data":"ad16d4424ac2a9447c86d0ee2e465b3eaa540f53dabcaf087f4315d70594023e"} Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.986183 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.986234 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.994391 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm" (OuterVolumeSpecName: "kube-api-access-drwmm") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "kube-api-access-drwmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.020014 4758 generic.go:334] "Generic (PLEG): container finished" podID="062c9394-cb5b-4768-b71f-2965c61905b8" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" exitCode=0 Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.022607 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.022652 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerDied","Data":"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b"} Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.083417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerDied","Data":"b5a75a63374b225cb63a7afd682c9ae7eaaa46ad8ba68384e44dabbcfac3a2df"} Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.083463 4758 scope.go:117] "RemoveContainer" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.065787 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.083841 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-75649bd464-bvxps" podStartSLOduration=3.083816909 podStartE2EDuration="3.083816909s" podCreationTimestamp="2026-01-30 08:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:43.038818729 +0000 UTC m=+1248.011130300" watchObservedRunningTime="2026-01-30 08:50:43.083816909 +0000 UTC m=+1248.056128460" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.127273 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config" (OuterVolumeSpecName: "config") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.173578 4758 scope.go:117] "RemoveContainer" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.185103 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.216247 4758 scope.go:117] "RemoveContainer" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" Jan 30 08:50:43 crc kubenswrapper[4758]: E0130 08:50:43.227258 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b\": container with ID starting with f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b not found: ID does not exist" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.227307 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b"} err="failed to get container status \"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b\": rpc error: code = NotFound desc = could not find container \"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b\": container with ID starting with f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b not found: ID does not exist"
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.227334 4758 scope.go:117] "RemoveContainer" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035"
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.227604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:43 crc kubenswrapper[4758]: E0130 08:50:43.228131 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035\": container with ID starting with c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035 not found: ID does not exist" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035"
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.228166 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035"} err="failed to get container status \"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035\": rpc error: code = NotFound desc = could not find container \"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035\": container with ID starting with c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035 not found: ID does not exist"
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.257950 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.265509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.286460 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.286674 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.286765 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.414784 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"]
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.426716 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"]
Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.779404 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" path="/var/lib/kubelet/pods/062c9394-cb5b-4768-b71f-2965c61905b8/volumes"
Jan 30 08:50:44 crc kubenswrapper[4758]: I0130 08:50:44.069329 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-856f46cdd-mkt57" podUID="e33c3e33-3106-483e-bdba-400a2911ff27" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:50:45 crc kubenswrapper[4758]: I0130 08:50:45.231250 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:50:45 crc kubenswrapper[4758]: I0130 08:50:45.440522 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.544450 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.546119 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.566813 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78f5565ffd-7fzt7"
Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.567633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78f5565ffd-7fzt7"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.011514 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.142588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.205222 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.299328 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.425741 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-856f46cdd-mkt57"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.485911 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.503809 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"]
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.584387 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout"
Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.922405 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-546cd7df57-wnwgz"
Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123367 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" containerID="cri-o://f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" gracePeriod=30
Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123488 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" containerID="cri-o://a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" gracePeriod=30
Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123577 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" containerID="cri-o://5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531" gracePeriod=30
Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123895 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe" containerID="cri-o://fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b" gracePeriod=30
Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.135961 4758 generic.go:334] "Generic (PLEG): container finished" podID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" exitCode=143
Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.136078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerDied","Data":"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"}
Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.140218 4758 generic.go:334] "Generic (PLEG): container finished" podID="96e4936b-1f93-4777-a6a9-a13172cba649" containerID="fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b" exitCode=0
Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.140281 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerDied","Data":"fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b"}
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.152929 4758 generic.go:334] "Generic (PLEG): container finished" podID="96e4936b-1f93-4777-a6a9-a13172cba649" containerID="5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531" exitCode=0
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.152981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerDied","Data":"5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531"}
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.477378 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.548751 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.556103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") "
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.557650 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") "
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.557778 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") "
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.557925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") "
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.558103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") "
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.558208 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.558704 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") "
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.559388 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.565509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.569118 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh" (OuterVolumeSpecName: "kube-api-access-b8bqh") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "kube-api-access-b8bqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.591221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts" (OuterVolumeSpecName: "scripts") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.661321 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.661560 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.661623 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.705018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.732133 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data" (OuterVolumeSpecName: "config-data") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.762883 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.762929 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.164384 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerDied","Data":"90b168ab74f0566ec17e5d43d386b2fcb8eeb79a45036e7170359c844ac4189c"}
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.164426 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.164441 4758 scope.go:117] "RemoveContainer" containerID="fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.190561 4758 scope.go:117] "RemoveContainer" containerID="5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.218899 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.238621 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269227 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269611 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="init"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269628 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="init"
Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269642 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269648 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler"
Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269661 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269667 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns"
Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269693 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269698 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269895 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269918 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269938 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.270911 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.272782 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.297922 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.357286 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.357304 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374559 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374716 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374748 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbd50144-fe99-468f-a32b-172996d95ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374777 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmc2h\" (UniqueName: \"kubernetes.io/projected/fbd50144-fe99-468f-a32b-172996d95ca1-kube-api-access-dmc2h\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.375204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.375411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.476966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmc2h\" (UniqueName: \"kubernetes.io/projected/fbd50144-fe99-468f-a32b-172996d95ca1-kube-api-access-dmc2h\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477085 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477313 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477338 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbd50144-fe99-468f-a32b-172996d95ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477432 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbd50144-fe99-468f-a32b-172996d95ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.483292 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.485730 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.489487 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.494938 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.501023 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmc2h\" (UniqueName: \"kubernetes.io/projected/fbd50144-fe99-468f-a32b-172996d95ca1-kube-api-access-dmc2h\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.592391 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.746478 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789081 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") "
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789580 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") "
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789619 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") "
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789999 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") "
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.790111 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") "
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.794808 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" path="/var/lib/kubelet/pods/96e4936b-1f93-4777-a6a9-a13172cba649/volumes"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.796818 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs" (OuterVolumeSpecName: "logs") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.800597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w" (OuterVolumeSpecName: "kube-api-access-ggd6w") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "kube-api-access-ggd6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.801449 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.841294 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904657 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904688 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904706 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904718 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.924357 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.924899 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.924923 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log"
Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.924959 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.924970 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.925222 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.925273 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.927225 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.934577 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.936920 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.937235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data" (OuterVolumeSpecName: "config-data") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.937416 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.937457 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4jqf4"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc 
kubenswrapper[4758]: I0130 08:50:52.018878 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.120807 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.120960 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.120988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.121855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.122875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod 
\"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.125658 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.125658 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.139843 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.175464 4758 generic.go:334] "Generic (PLEG): container finished" podID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" exitCode=0 Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.175589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerDied","Data":"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"} Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.175710 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.177143 4758 scope.go:117] "RemoveContainer" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.177067 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerDied","Data":"42842348e2efe75cc2feccd0c07f7b19138f408619623716e0521afb772faf84"} Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.255735 4758 scope.go:117] "RemoveContainer" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.266143 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.316931 4758 scope.go:117] "RemoveContainer" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.325259 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb\": container with ID starting with a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb not found: ID does not exist" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.325517 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"} err="failed to get container status \"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb\": rpc error: code = NotFound desc = could not find container 
\"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb\": container with ID starting with a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb not found: ID does not exist" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.325635 4758 scope.go:117] "RemoveContainer" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.328906 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.330303 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449\": container with ID starting with f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449 not found: ID does not exist" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.330343 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"} err="failed to get container status \"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449\": rpc error: code = NotFound desc = could not find container \"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449\": container with ID starting with f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449 not found: ID does not exist" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.373788 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.408185 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.442931 4758 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.464558 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.476321 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.477953 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.493082 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535671 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535785 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config-secret\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535939 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwdgv\" (UniqueName: \"kubernetes.io/projected/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-kube-api-access-zwdgv\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.580174 4758 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 08:50:52 crc kubenswrapper[4758]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_cd770c38-148a-45c9-bb2c-175504c327f0_0(5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e" Netns:"/var/run/netns/a1907451-6347-43a8-b4e3-3c856c71d622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e;K8S_POD_UID=cd770c38-148a-45c9-bb2c-175504c327f0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/cd770c38-148a-45c9-bb2c-175504c327f0]: expected pod UID "cd770c38-148a-45c9-bb2c-175504c327f0" but got "e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" from Kube API Jan 30 08:50:52 crc kubenswrapper[4758]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 08:50:52 crc kubenswrapper[4758]: > Jan 30 
08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.580270 4758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 08:50:52 crc kubenswrapper[4758]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_cd770c38-148a-45c9-bb2c-175504c327f0_0(5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e" Netns:"/var/run/netns/a1907451-6347-43a8-b4e3-3c856c71d622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e;K8S_POD_UID=cd770c38-148a-45c9-bb2c-175504c327f0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/cd770c38-148a-45c9-bb2c-175504c327f0]: expected pod UID "cd770c38-148a-45c9-bb2c-175504c327f0" but got "e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" from Kube API Jan 30 08:50:52 crc kubenswrapper[4758]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 08:50:52 crc kubenswrapper[4758]: > pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config\") pod \"openstackclient\" 
(UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config-secret\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwdgv\" (UniqueName: \"kubernetes.io/projected/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-kube-api-access-zwdgv\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.639276 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.644697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.646376 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config-secret\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.656846 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwdgv\" (UniqueName: \"kubernetes.io/projected/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-kube-api-access-zwdgv\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.806310 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.204705 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.204910 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbd50144-fe99-468f-a32b-172996d95ca1","Type":"ContainerStarted","Data":"23ea4bed019ca1e75838bd0bd12af58918fb6e1073b422dbc8c8f6715618c2e3"} Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.208455 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cd770c38-148a-45c9-bb2c-175504c327f0" podUID="e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.220451 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.368909 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369542 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369916 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: 
I0130 08:50:53.370589 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.378052 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg" (OuterVolumeSpecName: "kube-api-access-l42cg") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "kube-api-access-l42cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.383407 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.387190 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.433143 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:53 crc kubenswrapper[4758]: W0130 08:50:53.462149 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode81b8de8_1714_4a5d_852a_e61d4bc9cd5d.slice/crio-9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865 WatchSource:0}: Error finding container 9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865: Status 404 returned error can't find the container with id 9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865 Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.473937 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.473975 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.473985 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.583196 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.673928 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.674294 4758 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-645778c498-xt8kb" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" containerID="cri-o://d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85" gracePeriod=30 Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.674498 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-645778c498-xt8kb" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" containerID="cri-o://48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17" gracePeriod=30 Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.797845 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" path="/var/lib/kubelet/pods/6f80a6df-a108-4559-bc45-705f737ce4a1/volumes" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.798678 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd770c38-148a-45c9-bb2c-175504c327f0" path="/var/lib/kubelet/pods/cd770c38-148a-45c9-bb2c-175504c327f0/volumes" Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.218307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d","Type":"ContainerStarted","Data":"9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865"} Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.220161 4758 generic.go:334] "Generic (PLEG): container finished" podID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerID="48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17" exitCode=0 Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.220231 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" 
event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerDied","Data":"48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17"} Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.225083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.225079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbd50144-fe99-468f-a32b-172996d95ca1","Type":"ContainerStarted","Data":"993ef9cdf28dfec08b69082acb342cbf7d7f5eac2b2c57077d9ac7d6e5d613a7"} Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.225163 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbd50144-fe99-468f-a32b-172996d95ca1","Type":"ContainerStarted","Data":"ed6bab1d35aad2e4c19c2327c2665d4d0a64333e80008977623a6e72f73a205b"} Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.267469 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cd770c38-148a-45c9-bb2c-175504c327f0" podUID="e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.268852 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.268823982 podStartE2EDuration="3.268823982s" podCreationTimestamp="2026-01-30 08:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:54.258579256 +0000 UTC m=+1259.230890817" watchObservedRunningTime="2026-01-30 08:50:54.268823982 +0000 UTC m=+1259.241135543" Jan 30 08:50:56 crc kubenswrapper[4758]: I0130 08:50:56.593386 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 
08:50:57.242712 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263149 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" exitCode=137 Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263354 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"} Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"c0607f79147011ac217f4da35286aea41ac3abd388dd413c6012b3dbc3df6b9a"} Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263536 4758 scope.go:117] "RemoveContainer" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263737 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.303608 4758 scope.go:117] "RemoveContainer" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.357149 4758 scope.go:117] "RemoveContainer" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379429 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379510 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379541 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379659 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379674 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379713 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.380327 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.392549 4758 scope.go:117] "RemoveContainer" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.393688 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts" (OuterVolumeSpecName: "scripts") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.401259 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh" (OuterVolumeSpecName: "kube-api-access-dsgzh") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "kube-api-access-dsgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.427910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.442080 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.481680 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.481875 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.481943 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.482007 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.482081 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.515888 4758 scope.go:117] "RemoveContainer" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.518245 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db\": container with ID starting with 5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db not found: ID does not exist" 
containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.518288 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"} err="failed to get container status \"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db\": rpc error: code = NotFound desc = could not find container \"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db\": container with ID starting with 5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.518312 4758 scope.go:117] "RemoveContainer" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.521133 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96\": container with ID starting with c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96 not found: ID does not exist" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521166 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"} err="failed to get container status \"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96\": rpc error: code = NotFound desc = could not find container \"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96\": container with ID starting with c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96 not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521183 4758 scope.go:117] 
"RemoveContainer" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.521819 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8\": container with ID starting with 35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8 not found: ID does not exist" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521851 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"} err="failed to get container status \"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8\": rpc error: code = NotFound desc = could not find container \"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8\": container with ID starting with 35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8 not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521866 4758 scope.go:117] "RemoveContainer" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.522844 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340\": container with ID starting with 714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340 not found: ID does not exist" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.522873 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340"} err="failed to get container status \"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340\": rpc error: code = NotFound desc = could not find container \"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340\": container with ID starting with 714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340 not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.548332 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.558097 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data" (OuterVolumeSpecName: "config-data") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.585278 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.585308 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.608871 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.637118 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.661695 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666365 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666398 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666417 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666425 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666447 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" 
containerName="proxy-httpd" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666454 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666466 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666473 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666689 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666737 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666756 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666768 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.668579 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.675067 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.675985 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.676259 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.764633 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-75f5775999-fhl5h"] Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790633 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790870 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790937 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790989 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.791361 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.799305 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.799577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.799755 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.837707 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" path="/var/lib/kubelet/pods/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68/volumes" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.838620 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75f5775999-fhl5h"] Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.892862 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-public-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod 
\"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-run-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894465 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-log-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894759 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.895065 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.895479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-combined-ca-bundle\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " 
pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.895891 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-internal-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898341 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-config-data\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898446 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898977 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc 
kubenswrapper[4758]: I0130 08:50:57.899124 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.899215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d87gh\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-kube-api-access-d87gh\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.893548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.901033 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.896898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.909531 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.916992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.930778 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.940399 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-public-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000569 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-run-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " 
pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000585 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-log-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-combined-ca-bundle\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-internal-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000739 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-config-data\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc 
kubenswrapper[4758]: I0130 08:50:58.000780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d87gh\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-kube-api-access-d87gh\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.001484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-run-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.001624 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.001653 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.001691 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:50:58.50167499 +0000 UTC m=+1263.473986541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.001820 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-log-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.009924 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-public-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.015136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-internal-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.016992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-config-data\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.017253 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-combined-ca-bundle\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.022753 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.028201 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d87gh\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-kube-api-access-d87gh\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.297023 4758 generic.go:334] "Generic (PLEG): container finished" podID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerID="d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85" exitCode=0 Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.297200 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerDied","Data":"d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85"} Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.511471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.512115 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.512133 4758 projected.go:194] Error preparing 
data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.512178 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:50:59.512162933 +0000 UTC m=+1264.484474484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.613203 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.660777 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717474 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717636 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717672 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717730 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717756 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.728268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.728330 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z" (OuterVolumeSpecName: "kube-api-access-qdj7z") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "kube-api-access-qdj7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.779232 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config" (OuterVolumeSpecName: "config") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.790478 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.821219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.821773 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.822994 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.823108 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.823190 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.823270 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: W0130 08:50:58.822005 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b9e036dd-d20e-4488-8354-7b1079bc8113/volumes/kubernetes.io~secret/ovndb-tls-certs Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.824611 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod 
"b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.928153 4758 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.308478 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"d712f3f39456f410b004f641e198f9ee1a339ca9d1e26fe4b3178bcb4028fc36"} Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.310749 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerDied","Data":"e0c5305909e72f305e168967cbd28a761d5403564b6070027fcc64369e555331"} Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.310825 4758 scope.go:117] "RemoveContainer" containerID="48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.311063 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.383280 4758 scope.go:117] "RemoveContainer" containerID="d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.410565 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.426338 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.543008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:59 crc kubenswrapper[4758]: E0130 08:50:59.543161 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:59 crc kubenswrapper[4758]: E0130 08:50:59.543298 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:50:59 crc kubenswrapper[4758]: E0130 08:50:59.543381 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:01.543361734 +0000 UTC m=+1266.515673285 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.781259 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" path="/var/lib/kubelet/pods/b9e036dd-d20e-4488-8354-7b1079bc8113/volumes" Jan 30 08:51:00 crc kubenswrapper[4758]: I0130 08:51:00.328447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b"} Jan 30 08:51:00 crc kubenswrapper[4758]: I0130 08:51:00.329026 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7"} Jan 30 08:51:01 crc kubenswrapper[4758]: I0130 08:51:01.051659 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:01 crc kubenswrapper[4758]: I0130 08:51:01.601269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:01 crc kubenswrapper[4758]: E0130 08:51:01.601672 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:51:01 crc kubenswrapper[4758]: E0130 08:51:01.601697 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:01 crc kubenswrapper[4758]: E0130 08:51:01.601778 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:05.601747678 +0000 UTC m=+1270.574059229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:51:01 crc kubenswrapper[4758]: I0130 08:51:01.875824 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 08:51:05 crc kubenswrapper[4758]: I0130 08:51:05.392816 4758 generic.go:334] "Generic (PLEG): container finished" podID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerID="b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643" exitCode=137 Jan 30 08:51:05 crc kubenswrapper[4758]: I0130 08:51:05.392895 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerDied","Data":"b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643"} Jan 30 08:51:05 crc kubenswrapper[4758]: I0130 08:51:05.693758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:05 crc kubenswrapper[4758]: E0130 08:51:05.694034 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found 
Jan 30 08:51:05 crc kubenswrapper[4758]: E0130 08:51:05.694070 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:05 crc kubenswrapper[4758]: E0130 08:51:05.694126 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:13.694105411 +0000 UTC m=+1278.666416962 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.407584 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a" exitCode=137 Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.407748 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a"} Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.412692 4758 generic.go:334] "Generic (PLEG): container finished" podID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerID="33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27" exitCode=137 Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.412734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerDied","Data":"33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27"} Jan 
30 08:51:07 crc kubenswrapper[4758]: I0130 08:51:07.434643 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": dial tcp 10.217.0.165:8776: connect: connection refused" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.439596 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.468096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150"} Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492296 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492537 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492774 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492850 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.493436 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs" (OuterVolumeSpecName: "logs") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.493226 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.495451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.495940 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc 
kubenswrapper[4758]: I0130 08:51:09.496077 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.498910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg" (OuterVolumeSpecName: "kube-api-access-9nrlg") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "kube-api-access-9nrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.511709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8"} Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.514054 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.516571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts" (OuterVolumeSpecName: "scripts") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.523123 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerDied","Data":"34dd11d55e470d260388ede08e80598aeb8947a7588a438590094ab2a374b774"} Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.523256 4758 scope.go:117] "RemoveContainer" containerID="b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.523430 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.558379 4758 scope.go:117] "RemoveContainer" containerID="cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598186 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598218 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598229 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598238 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: 
I0130 08:51:09.598612 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.660251 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data" (OuterVolumeSpecName: "config-data") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.699772 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.699807 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.867110 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.884852 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.923940 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924388 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" 
containerName="neutron-httpd" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924407 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924422 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924428 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924440 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924446 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924494 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924501 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924677 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924693 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924703 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" Jan 30 
08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924724 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.926750 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.928660 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.930431 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.930817 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.935634 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006393 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d3f5e9-e330-476b-be63-775114f987e6-logs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgf8\" (UniqueName: \"kubernetes.io/projected/d6d3f5e9-e330-476b-be63-775114f987e6-kube-api-access-2vgf8\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006615 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-scripts\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006802 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007741 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3f5e9-e330-476b-be63-775114f987e6-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007916 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110218 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3f5e9-e330-476b-be63-775114f987e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110368 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110418 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d3f5e9-e330-476b-be63-775114f987e6-logs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110488 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgf8\" (UniqueName: \"kubernetes.io/projected/d6d3f5e9-e330-476b-be63-775114f987e6-kube-api-access-2vgf8\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110505 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-scripts\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.112055 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3f5e9-e330-476b-be63-775114f987e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " 
pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.114639 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d3f5e9-e330-476b-be63-775114f987e6-logs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.131494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.131754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.131795 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.132439 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.132553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgf8\" (UniqueName: 
\"kubernetes.io/projected/d6d3f5e9-e330-476b-be63-775114f987e6-kube-api-access-2vgf8\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.133371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-scripts\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.141239 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.322511 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.543672 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0"} Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.568226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d","Type":"ContainerStarted","Data":"b5dde43d054d2e7e158ca882b27107da67d5b7979c13958ff3d9e8a56758f87e"} Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.602446 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.892480091 podStartE2EDuration="18.602423194s" podCreationTimestamp="2026-01-30 08:50:52 +0000 UTC" firstStartedPulling="2026-01-30 08:50:53.479122636 
+0000 UTC m=+1258.451434187" lastFinishedPulling="2026-01-30 08:51:09.189065749 +0000 UTC m=+1274.161377290" observedRunningTime="2026-01-30 08:51:10.590114344 +0000 UTC m=+1275.562425895" watchObservedRunningTime="2026-01-30 08:51:10.602423194 +0000 UTC m=+1275.574734745" Jan 30 08:51:10 crc kubenswrapper[4758]: W0130 08:51:10.880307 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6d3f5e9_e330_476b_be63_775114f987e6.slice/crio-33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7 WatchSource:0}: Error finding container 33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7: Status 404 returned error can't find the container with id 33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7 Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.883992 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.582195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7"} Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.600778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5"} Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.601705 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" containerID="cri-o://1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7" gracePeriod=30 Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.601847 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.602323 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" containerID="cri-o://056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b" gracePeriod=30 Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.602391 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" containerID="cri-o://e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8" gracePeriod=30 Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.602612 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" containerID="cri-o://67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5" gracePeriod=30 Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.653489 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.245507568 podStartE2EDuration="14.653461946s" podCreationTimestamp="2026-01-30 08:50:57 +0000 UTC" firstStartedPulling="2026-01-30 08:50:58.665170535 +0000 UTC m=+1263.637482086" lastFinishedPulling="2026-01-30 08:51:11.073124913 +0000 UTC m=+1276.045436464" observedRunningTime="2026-01-30 08:51:11.635558527 +0000 UTC m=+1276.607870078" watchObservedRunningTime="2026-01-30 08:51:11.653461946 +0000 UTC m=+1276.625773497" Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.784286 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" path="/var/lib/kubelet/pods/46b02117-6d35-4cca-8eac-ee772c3916d0/volumes" 
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613177 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8" exitCode=2 Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613514 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b" exitCode=0 Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613524 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7" exitCode=0 Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613239 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8"} Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b"} Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7"} Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.617364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"76f5450755f091c8a03cab7443d0eb6e42d855f698a24bfdf17cfe5abfbe0036"} Jan 30 08:51:12 crc kubenswrapper[4758]: 
I0130 08:51:12.617406 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"59a4637a4368a6b9b62000c97418aafd3de137b3c50e1699a1b0131a7de36865"} Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.618483 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.659372 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.659353953 podStartE2EDuration="3.659353953s" podCreationTimestamp="2026-01-30 08:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:12.652594908 +0000 UTC m=+1277.624906459" watchObservedRunningTime="2026-01-30 08:51:12.659353953 +0000 UTC m=+1277.631665504" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.864871 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.866373 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.984340 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.984631 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6bdfdc4b-wwqnt" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" containerID="cri-o://ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19" gracePeriod=30 Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.985104 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-6bdfdc4b-wwqnt" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" containerID="cri-o://21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7" gracePeriod=30 Jan 30 08:51:13 crc kubenswrapper[4758]: I0130 08:51:13.632374 4758 generic.go:334] "Generic (PLEG): container finished" podID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerID="ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19" exitCode=143 Jan 30 08:51:13 crc kubenswrapper[4758]: I0130 08:51:13.632446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerDied","Data":"ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19"} Jan 30 08:51:13 crc kubenswrapper[4758]: I0130 08:51:13.733654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:13 crc kubenswrapper[4758]: E0130 08:51:13.735464 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:51:13 crc kubenswrapper[4758]: E0130 08:51:13.735487 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:13 crc kubenswrapper[4758]: E0130 08:51:13.735544 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:29.735512192 +0000 UTC m=+1294.707823743 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.009477 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.010017 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.043305 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.043358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.189572 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.190777 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.219725 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.285812 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.285944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.292084 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.295599 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.309348 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.317506 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.325239 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.337261 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.386703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399287 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399398 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399431 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399597 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: 
\"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399791 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.400964 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.448398 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.448423 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.449650 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.473324 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.501629 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.501952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.501992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.502066 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.503135 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.503693 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.523582 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.535920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.555489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.608981 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"nova-cell1-db-create-7kv6r\" (UID: 
\"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.609073 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.619303 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.621684 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.628899 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.631566 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.711907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.711960 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " 
pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.713375 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.713420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.720887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.721140 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.741673 4758 generic.go:334] "Generic (PLEG): container finished" podID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerID="21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7" exitCode=0 Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.741767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerDied","Data":"21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7"} Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.742767 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.746559 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.816267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.816332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.817347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.828822 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.830547 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.840684 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.844461 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.846541 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.918499 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.918564 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.921942 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.970073 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.021536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.021588 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.022375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.036319 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.052125 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.122734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.123310 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.123389 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.123438 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.124334 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.124397 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.124439 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.133264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs" (OuterVolumeSpecName: "logs") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.165077 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts" (OuterVolumeSpecName: "scripts") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.165618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l" (OuterVolumeSpecName: "kube-api-access-l8f8l") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "kube-api-access-l8f8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.179453 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.238064 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.238108 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.238119 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.273893 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data" (OuterVolumeSpecName: "config-data") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.343252 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.373402 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.380900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.444586 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.445067 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: W0130 08:51:17.464417 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff40ca2_ce20_4c6a_82c7_aa2c5b744ac8.slice/crio-816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a WatchSource:0}: Error finding container 816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a: Status 404 returned error can't find the container with id 816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.504236 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.704049 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.761187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.769247 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.789515 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.803664 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerDied","Data":"d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287"} Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.803712 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.803737 4758 scope.go:117] "RemoveContainer" containerID="21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.805879 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7afc-account-create-update-4gfp5" event={"ID":"227bbca8-b963-4b39-af28-ac9dbf50bc73","Type":"ContainerStarted","Data":"a2c659612010a8683ce3073643412a9fa5344a2a97ed89cbf76212a7b290550d"} Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.835481 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-psn47" event={"ID":"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8","Type":"ContainerStarted","Data":"816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a"} Jan 30 08:51:17 crc 
kubenswrapper[4758]: I0130 08:51:17.858952 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-psn47" podStartSLOduration=1.858932372 podStartE2EDuration="1.858932372s" podCreationTimestamp="2026-01-30 08:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:17.854549714 +0000 UTC m=+1282.826861265" watchObservedRunningTime="2026-01-30 08:51:17.858932372 +0000 UTC m=+1282.831243943" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.902229 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.909799 4758 scope.go:117] "RemoveContainer" containerID="ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.914735 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.973982 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.993682 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.098258 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.852259 4758 generic.go:334] "Generic (PLEG): container finished" podID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerID="67a4a1af734c4adf4de193490eeca87888430ce68cfa2191062d7093ddc838ba" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.852390 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" 
event={"ID":"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b","Type":"ContainerDied","Data":"67a4a1af734c4adf4de193490eeca87888430ce68cfa2191062d7093ddc838ba"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.852854 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" event={"ID":"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b","Type":"ContainerStarted","Data":"487a75230b0ad7c3beb61caa1e8344bbacf4bcec914f812e0fc95cfaaf6b32f2"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.855077 4758 generic.go:334] "Generic (PLEG): container finished" podID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerID="4f8803add769de9f15a23dbfbf03688310d1c6b3d5c93d79adba46cb16dec5ca" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.855391 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7kv6r" event={"ID":"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad","Type":"ContainerDied","Data":"4f8803add769de9f15a23dbfbf03688310d1c6b3d5c93d79adba46cb16dec5ca"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.855512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7kv6r" event={"ID":"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad","Type":"ContainerStarted","Data":"6558b9481bf5d83a49bf2d63e49da05e1d21559501c33d60cc86bfb24fe3eccc"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.858740 4758 generic.go:334] "Generic (PLEG): container finished" podID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerID="2a553d18e8c8709e9420435d0699cca30262f29d812796ba71700ab0845f9d1c" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.858949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7afc-account-create-update-4gfp5" event={"ID":"227bbca8-b963-4b39-af28-ac9dbf50bc73","Type":"ContainerDied","Data":"2a553d18e8c8709e9420435d0699cca30262f29d812796ba71700ab0845f9d1c"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 
08:51:18.861540 4758 generic.go:334] "Generic (PLEG): container finished" podID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerID="6b5f2a511f68dead3ed1b1b92615f5c6856551f3b3f434c8b9484d4f99758a12" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.861627 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-psn47" event={"ID":"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8","Type":"ContainerDied","Data":"6b5f2a511f68dead3ed1b1b92615f5c6856551f3b3f434c8b9484d4f99758a12"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.866950 4758 generic.go:334] "Generic (PLEG): container finished" podID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerID="54d62ade615f4d66aac7ea9231240e05dbce2b51363637c91db0cd5147899e3f" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.867023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" event={"ID":"6a7e2817-2850-4d53-8ebf-4977eea68664","Type":"ContainerDied","Data":"54d62ade615f4d66aac7ea9231240e05dbce2b51363637c91db0cd5147899e3f"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.867065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" event={"ID":"6a7e2817-2850-4d53-8ebf-4977eea68664","Type":"ContainerStarted","Data":"cfdbfde9be2d4f234d018b8247245223ca5ce92f66264252d412902253aabb6e"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.871478 4758 generic.go:334] "Generic (PLEG): container finished" podID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerID="e022a3d99ce24d9dd10ef619189c15979640758b052ecd93de5065aa03c1b11a" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.871546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mvw7" event={"ID":"1d2dbd4c-2c5d-4865-8cf2-ce663e060369","Type":"ContainerDied","Data":"e022a3d99ce24d9dd10ef619189c15979640758b052ecd93de5065aa03c1b11a"} Jan 
30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.871589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mvw7" event={"ID":"1d2dbd4c-2c5d-4865-8cf2-ce663e060369","Type":"ContainerStarted","Data":"b4be5192e8d6318c22c8ad8a80caa6130b1d2b7b78c4b32785798458cd453187"} Jan 30 08:51:19 crc kubenswrapper[4758]: I0130 08:51:19.781083 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" path="/var/lib/kubelet/pods/82fd1f36-9f4f-441f-959d-e2eddc79c99b/volumes" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.404212 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.547096 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"6a7e2817-2850-4d53-8ebf-4977eea68664\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.547361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"6a7e2817-2850-4d53-8ebf-4977eea68664\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.548055 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a7e2817-2850-4d53-8ebf-4977eea68664" (UID: "6a7e2817-2850-4d53-8ebf-4977eea68664"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.555340 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj" (OuterVolumeSpecName: "kube-api-access-wl7xj") pod "6a7e2817-2850-4d53-8ebf-4977eea68664" (UID: "6a7e2817-2850-4d53-8ebf-4977eea68664"). InnerVolumeSpecName "kube-api-access-wl7xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.658774 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.658803 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.759805 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.774256 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.786779 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.831089 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.851965 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865500 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865573 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865719 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865754 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865786 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865819 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.871030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs" (OuterVolumeSpecName: "kube-api-access-q78gs") pod "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" (UID: "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad"). InnerVolumeSpecName "kube-api-access-q78gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.871512 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" (UID: "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.871901 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" (UID: "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.872269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" (UID: "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.882821 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx" (OuterVolumeSpecName: "kube-api-access-lflrx") pod "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" (UID: "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8"). InnerVolumeSpecName "kube-api-access-lflrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.887221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b" (OuterVolumeSpecName: "kube-api-access-mqg5b") pod "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" (UID: "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b"). InnerVolumeSpecName "kube-api-access-mqg5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.906084 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" event={"ID":"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b","Type":"ContainerDied","Data":"487a75230b0ad7c3beb61caa1e8344bbacf4bcec914f812e0fc95cfaaf6b32f2"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.906126 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="487a75230b0ad7c3beb61caa1e8344bbacf4bcec914f812e0fc95cfaaf6b32f2" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.906194 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.920276 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.920296 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7kv6r" event={"ID":"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad","Type":"ContainerDied","Data":"6558b9481bf5d83a49bf2d63e49da05e1d21559501c33d60cc86bfb24fe3eccc"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.920379 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6558b9481bf5d83a49bf2d63e49da05e1d21559501c33d60cc86bfb24fe3eccc" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.924501 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7afc-account-create-update-4gfp5" event={"ID":"227bbca8-b963-4b39-af28-ac9dbf50bc73","Type":"ContainerDied","Data":"a2c659612010a8683ce3073643412a9fa5344a2a97ed89cbf76212a7b290550d"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.924575 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c659612010a8683ce3073643412a9fa5344a2a97ed89cbf76212a7b290550d" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.924678 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.938404 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-psn47" event={"ID":"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8","Type":"ContainerDied","Data":"816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.938448 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.938511 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.951955 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" event={"ID":"6a7e2817-2850-4d53-8ebf-4977eea68664","Type":"ContainerDied","Data":"cfdbfde9be2d4f234d018b8247245223ca5ce92f66264252d412902253aabb6e"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.951997 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfdbfde9be2d4f234d018b8247245223ca5ce92f66264252d412902253aabb6e" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.952074 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.959125 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mvw7" event={"ID":"1d2dbd4c-2c5d-4865-8cf2-ce663e060369","Type":"ContainerDied","Data":"b4be5192e8d6318c22c8ad8a80caa6130b1d2b7b78c4b32785798458cd453187"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.959334 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4be5192e8d6318c22c8ad8a80caa6130b1d2b7b78c4b32785798458cd453187" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.959399 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.967897 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"227bbca8-b963-4b39-af28-ac9dbf50bc73\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.967951 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"227bbca8-b963-4b39-af28-ac9dbf50bc73\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.968143 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.968245 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.968961 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d2dbd4c-2c5d-4865-8cf2-ce663e060369" (UID: "1d2dbd4c-2c5d-4865-8cf2-ce663e060369"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969164 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969191 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969209 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969395 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969432 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969445 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969457 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 
08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969656 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "227bbca8-b963-4b39-af28-ac9dbf50bc73" (UID: "227bbca8-b963-4b39-af28-ac9dbf50bc73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.973735 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6" (OuterVolumeSpecName: "kube-api-access-q2zm6") pod "227bbca8-b963-4b39-af28-ac9dbf50bc73" (UID: "227bbca8-b963-4b39-af28-ac9dbf50bc73"). InnerVolumeSpecName "kube-api-access-q2zm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.975235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77" (OuterVolumeSpecName: "kube-api-access-kbr77") pod "1d2dbd4c-2c5d-4865-8cf2-ce663e060369" (UID: "1d2dbd4c-2c5d-4865-8cf2-ce663e060369"). InnerVolumeSpecName "kube-api-access-kbr77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:21 crc kubenswrapper[4758]: I0130 08:51:21.072410 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:21 crc kubenswrapper[4758]: I0130 08:51:21.072456 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:21 crc kubenswrapper[4758]: I0130 08:51:21.072470 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.056437 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.056995 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" containerID="cri-o://c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be" gracePeriod=30 Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.057089 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" containerID="cri-o://133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3" gracePeriod=30 Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.387247 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.387325 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.977246 4758 generic.go:334] "Generic (PLEG): container finished" podID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerID="c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be" exitCode=143 Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.977563 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerDied","Data":"c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be"} Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.678090 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.742122 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.742333 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" containerID="cri-o://76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" gracePeriod=30 Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.742736 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" containerID="cri-o://c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" gracePeriod=30 Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.990048 4758 generic.go:334] "Generic (PLEG): container finished" podID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" exitCode=143 Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.990308 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerDied","Data":"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"} Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.013088 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026244 4758 generic.go:334] "Generic (PLEG): container finished" podID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerID="133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3" exitCode=0 Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026291 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerDied","Data":"133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3"} Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026319 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerDied","Data":"a7359b09c33ee03bcd96bf4b4d5dd4f4f882e30e5f48059a01c80017926d9869"} Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026330 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7359b09c33ee03bcd96bf4b4d5dd4f4f882e30e5f48059a01c80017926d9869" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.027087 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.044940 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.179985 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180527 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs" (OuterVolumeSpecName: "logs") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180658 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180901 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180987 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181051 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181465 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181480 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.186718 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.188670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts" (OuterVolumeSpecName: "scripts") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.191014 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4" (OuterVolumeSpecName: "kube-api-access-9mbs4") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "kube-api-access-9mbs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.252222 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.274079 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data" (OuterVolumeSpecName: "config-data") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.287993 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288025 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288051 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288071 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288081 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.301624 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.317140 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.389474 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.389513 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665220 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665803 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665816 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665847 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc 
kubenswrapper[4758]: I0130 08:51:26.665854 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665865 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665871 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665888 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665894 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665904 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665911 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665921 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665928 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665943 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" 
containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665948 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665960 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665966 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665976 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665982 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666186 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666198 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666210 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666222 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666230 4758 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666238 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666250 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666259 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666269 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666279 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666846 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.670423 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wlml7" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.671812 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.678799 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.698511 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802592 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802724 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802779 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " 
pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802831 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.904712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.904764 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.904941 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.905145 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: 
\"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.912114 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.913909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.916646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.928850 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.984311 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.042481 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.102467 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.156640 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.168914 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.170424 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.174584 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.174831 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.218289 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328330 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-logs\") pod \"glance-default-external-api-0\" (UID: 
\"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328373 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6glh8\" (UniqueName: \"kubernetes.io/projected/9e6a95ad-6f31-4494-9caf-5eea1c43e005-kube-api-access-6glh8\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328428 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328519 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430443 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-logs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430502 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6glh8\" (UniqueName: \"kubernetes.io/projected/9e6a95ad-6f31-4494-9caf-5eea1c43e005-kube-api-access-6glh8\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") 
" pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430608 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430632 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430649 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.439612 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" 
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.443424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.446369 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.449258 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-logs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.455619 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.468620 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.474838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.480921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6glh8\" (UniqueName: \"kubernetes.io/projected/9e6a95ad-6f31-4494-9caf-5eea1c43e005-kube-api-access-6glh8\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.500530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.530519 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.672641 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": dial tcp 10.217.0.150:9292: connect: connection refused" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.673141 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": dial tcp 10.217.0.150:9292: connect: connection refused" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.696545 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.795958 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" path="/var/lib/kubelet/pods/1ec2d231-4b26-4cc9-a09d-9091153da8a9/volumes" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.914209 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.060762 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerStarted","Data":"ff53183262e2ed46b22c3c8849edd4292f90605608d79f5735d4ef8ac3de71f3"} Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063477 4758 generic.go:334] "Generic (PLEG): container finished" podID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" exitCode=0 Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063511 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerDied","Data":"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"} Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerDied","Data":"09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06"} Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063560 4758 scope.go:117] "RemoveContainer" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063748 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064220 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064318 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064412 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064494 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064611 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064665 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064798 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.067906 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs" (OuterVolumeSpecName: "logs") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.068895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.082637 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.113751 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.113787 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.123931 4758 scope.go:117] "RemoveContainer" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.124112 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts" (OuterVolumeSpecName: "scripts") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.148200 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.148321 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9" (OuterVolumeSpecName: "kube-api-access-mmcr9") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "kube-api-access-mmcr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.197093 4758 scope.go:117] "RemoveContainer" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.198740 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea\": container with ID starting with c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea not found: ID does not exist" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.198774 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"} err="failed to get container status \"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea\": rpc error: code = NotFound desc = could not find container \"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea\": container with ID starting with c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea not found: ID does not exist" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.198804 4758 scope.go:117] "RemoveContainer" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 
08:51:28.204006 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1\": container with ID starting with 76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1 not found: ID does not exist" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.204794 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"} err="failed to get container status \"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1\": rpc error: code = NotFound desc = could not find container \"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1\": container with ID starting with 76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1 not found: ID does not exist" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.216409 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.216439 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.216448 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.247435 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data" (OuterVolumeSpecName: "config-data") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.247876 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.257602 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.280582 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323923 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323956 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323970 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323983 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.353656 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.435468 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.448093 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.472686 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 
08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.477782 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.477812 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.477828 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.477833 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.478095 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.478117 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.479158 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.488316 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.499334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.500607 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637694 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637779 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637855 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637888 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmm5q\" (UniqueName: \"kubernetes.io/projected/7fa932ed-7bd7-4827-a24f-e29c15c9b563-kube-api-access-gmm5q\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.638001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.638024 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.739853 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740221 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740304 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740337 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740448 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmm5q\" (UniqueName: \"kubernetes.io/projected/7fa932ed-7bd7-4827-a24f-e29c15c9b563-kube-api-access-gmm5q\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.741920 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.744988 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.745710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.764641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.766601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.772383 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.773218 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmm5q\" (UniqueName: \"kubernetes.io/projected/7fa932ed-7bd7-4827-a24f-e29c15c9b563-kube-api-access-gmm5q\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.794332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.836782 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.917795 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.119092 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e6a95ad-6f31-4494-9caf-5eea1c43e005","Type":"ContainerStarted","Data":"391a2503f40d24ef8468ab0cf86e82db544fd638bbd7f6f4bdf182461131be1e"} Jan 30 08:51:29 crc kubenswrapper[4758]: W0130 08:51:29.763288 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa932ed_7bd7_4827_a24f_e29c15c9b563.slice/crio-c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c WatchSource:0}: Error finding container c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c: Status 404 returned error can't find the container with id c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.764111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:29 crc kubenswrapper[4758]: E0130 08:51:29.764310 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not 
found Jan 30 08:51:29 crc kubenswrapper[4758]: E0130 08:51:29.764330 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:29 crc kubenswrapper[4758]: E0130 08:51:29.764387 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:52:01.764371731 +0000 UTC m=+1326.736683282 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.793201 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" path="/var/lib/kubelet/pods/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f/volumes" Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.794073 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:30 crc kubenswrapper[4758]: I0130 08:51:30.197852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e6a95ad-6f31-4494-9caf-5eea1c43e005","Type":"ContainerStarted","Data":"e0d2754618477bb743ceff0dadebc2378bd1d077661a715cd83f31912f354dcf"} Jan 30 08:51:30 crc kubenswrapper[4758]: I0130 08:51:30.206141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa932ed-7bd7-4827-a24f-e29c15c9b563","Type":"ContainerStarted","Data":"c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c"} Jan 30 08:51:30 crc kubenswrapper[4758]: I0130 08:51:30.331256 4758 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d6d3f5e9-e330-476b-be63-775114f987e6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.173:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:51:31 crc kubenswrapper[4758]: I0130 08:51:31.267689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa932ed-7bd7-4827-a24f-e29c15c9b563","Type":"ContainerStarted","Data":"3abe4e9fe60381a5aef6d17c1d31d12ae165a4d405b4c3caadd447c18bfde8b4"} Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.288911 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e6a95ad-6f31-4494-9caf-5eea1c43e005","Type":"ContainerStarted","Data":"b35731ce8d97af63524dc2599a821b9957ee8d4fbfb0c8d7c135f659261e2579"} Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.298513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa932ed-7bd7-4827-a24f-e29c15c9b563","Type":"ContainerStarted","Data":"1f0c188b26791f3f363cf7d39f9343081b6efaa5d32eed1b6bb8fa592574a9f7"} Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.328187 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.328157757 podStartE2EDuration="5.328157757s" podCreationTimestamp="2026-01-30 08:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:32.317304352 +0000 UTC m=+1297.289615893" watchObservedRunningTime="2026-01-30 08:51:32.328157757 +0000 UTC m=+1297.300469318" Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.354142 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" 
podStartSLOduration=4.354122812 podStartE2EDuration="4.354122812s" podCreationTimestamp="2026-01-30 08:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:32.349542996 +0000 UTC m=+1297.321854547" watchObservedRunningTime="2026-01-30 08:51:32.354122812 +0000 UTC m=+1297.326434363" Jan 30 08:51:36 crc kubenswrapper[4758]: I0130 08:51:36.010939 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:51:36 crc kubenswrapper[4758]: I0130 08:51:36.044577 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.531358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.531706 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.575181 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.579275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.373825 4758 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.373879 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.919348 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.919702 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.981961 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:39 crc kubenswrapper[4758]: I0130 08:51:39.044823 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:39 crc kubenswrapper[4758]: I0130 08:51:39.382151 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:39 crc kubenswrapper[4758]: I0130 08:51:39.382180 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:41 crc kubenswrapper[4758]: I0130 08:51:41.402865 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:41 crc kubenswrapper[4758]: I0130 08:51:41.403203 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:42 crc kubenswrapper[4758]: I0130 08:51:42.442022 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5" exitCode=137 Jan 30 08:51:42 crc kubenswrapper[4758]: I0130 08:51:42.442206 4758 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5"} Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.038891 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.039977 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.501686 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"d712f3f39456f410b004f641e198f9ee1a339ca9d1e26fe4b3178bcb4028fc36"} Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.502030 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d712f3f39456f410b004f641e198f9ee1a339ca9d1e26fe4b3178bcb4028fc36" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.521777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.521895 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.556749 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.568149 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714314 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714741 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714784 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714807 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714893 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc 
kubenswrapper[4758]: I0130 08:51:44.714945 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714978 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.717514 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.719622 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.728342 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l" (OuterVolumeSpecName: "kube-api-access-x2c7l") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "kube-api-access-x2c7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.733273 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts" (OuterVolumeSpecName: "scripts") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818006 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818085 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818098 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818109 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.850257 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.856175 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.866181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.913239 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data" (OuterVolumeSpecName: "config-data") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.920264 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.920499 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.920564 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.511122 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerStarted","Data":"638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f"} Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.511198 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.532297 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-chr6l" podStartSLOduration=2.854086464 podStartE2EDuration="19.532280647s" podCreationTimestamp="2026-01-30 08:51:26 +0000 UTC" firstStartedPulling="2026-01-30 08:51:27.710198871 +0000 UTC m=+1292.682510422" lastFinishedPulling="2026-01-30 08:51:44.388393054 +0000 UTC m=+1309.360704605" observedRunningTime="2026-01-30 08:51:45.531336086 +0000 UTC m=+1310.503647647" watchObservedRunningTime="2026-01-30 08:51:45.532280647 +0000 UTC m=+1310.504592198" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.575799 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.582614 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594112 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594526 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594544 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594559 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594565 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" Jan 30 08:51:45 crc 
kubenswrapper[4758]: E0130 08:51:45.594587 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594593 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594605 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594611 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594796 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594816 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594832 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594843 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.602941 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.605993 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.613433 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.621497 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634584 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634746 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634764 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " 
pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634797 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634820 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634850 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.736630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.736958 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.736988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737789 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737830 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737928 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 
crc kubenswrapper[4758]: I0130 08:51:45.738163 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.748966 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.749513 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.750737 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.772276 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.819530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"ceilometer-0\" (UID: 
\"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.853184 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456181b2-373c-4e15-abaf-b35287b20b59" path="/var/lib/kubelet/pods/456181b2-373c-4e15-abaf-b35287b20b59/volumes" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.922452 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.012066 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.012163 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.013104 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0"} pod="openstack/horizon-76fc974bd8-4mnvj" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.013151 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" containerID="cri-o://e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0" gracePeriod=30 Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.046897 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.046985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.047693 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150"} pod="openstack/horizon-5cf698bb7b-gp87v" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.047736 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" containerID="cri-o://a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150" gracePeriod=30 Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.568563 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:47 crc kubenswrapper[4758]: I0130 08:51:47.540880 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"e237696dd8d561d61802d4c992bbd607d9c5b3b20249129551ad1a402c39f754"} Jan 30 08:51:48 crc kubenswrapper[4758]: I0130 08:51:48.549702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"} Jan 30 08:51:49 crc kubenswrapper[4758]: I0130 08:51:49.559860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"} Jan 30 08:51:50 crc kubenswrapper[4758]: I0130 08:51:50.571422 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"} Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.387509 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.387825 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.610826 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"} Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.611018 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.643941 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.837196706 podStartE2EDuration="7.64390948s" podCreationTimestamp="2026-01-30 08:51:45 +0000 UTC" firstStartedPulling="2026-01-30 08:51:46.639820664 +0000 
UTC m=+1311.612132215" lastFinishedPulling="2026-01-30 08:51:51.446533438 +0000 UTC m=+1316.418844989" observedRunningTime="2026-01-30 08:51:52.636278518 +0000 UTC m=+1317.608590089" watchObservedRunningTime="2026-01-30 08:51:52.64390948 +0000 UTC m=+1317.616221031" Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.394670 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.395689 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent" containerID="cri-o://0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.396248 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd" containerID="cri-o://418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.396336 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core" containerID="cri-o://0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.396381 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent" containerID="cri-o://5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653201 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" 
containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" exitCode=0 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653495 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb" exitCode=2 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"} Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"} Jan 30 08:51:58 crc kubenswrapper[4758]: I0130 08:51:58.664185 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7" exitCode=0 Jan 30 08:51:58 crc kubenswrapper[4758]: I0130 08:51:58.664231 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"} Jan 30 08:51:59 crc kubenswrapper[4758]: I0130 08:51:59.021621 4758 scope.go:117] "RemoveContainer" containerID="cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce" Jan 30 08:51:59 crc kubenswrapper[4758]: I0130 08:51:59.046983 4758 scope.go:117] "RemoveContainer" containerID="4ac71eee89e7259c48423a040dbed7ad9542787e6ed5a91e7cacd0c60611852e" Jan 30 08:51:59 crc kubenswrapper[4758]: I0130 08:51:59.067996 4758 scope.go:117] "RemoveContainer" 
containerID="69a7ca6a35440c8d6a6b5c64641df205b1f58c73899c4a6f6ad67e7fef9baab8" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.497248 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.608361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.608749 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.608950 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609063 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609097 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: 
\"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609135 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609445 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609517 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609711 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609734 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.615176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts" (OuterVolumeSpecName: "scripts") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.627060 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk" (OuterVolumeSpecName: "kube-api-access-4bqwk") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "kube-api-access-4bqwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.650206 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.682229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693707 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22" exitCode=0 Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693755 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"} Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693789 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"e237696dd8d561d61802d4c992bbd607d9c5b3b20249129551ad1a402c39f754"} Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693809 4758 scope.go:117] "RemoveContainer" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693966 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.711984 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.712012 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.712022 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.712031 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.721869 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data" (OuterVolumeSpecName: "config-data") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.751571 4758 scope.go:117] "RemoveContainer" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.775874 4758 scope.go:117] "RemoveContainer" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.808776 4758 scope.go:117] "RemoveContainer" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.813318 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.813604 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.813691 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.813839 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:53:05.813820145 +0000 UTC m=+1390.786131696 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.813721 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.831548 4758 scope.go:117] "RemoveContainer" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.838704 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9\": container with ID starting with 418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9 not found: ID does not exist" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.838812 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"} err="failed to get container status \"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9\": rpc error: code = NotFound desc = could not find container \"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9\": container with ID starting with 418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9 not found: ID does not exist" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.838888 4758 scope.go:117] "RemoveContainer" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb" Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 
08:52:01.839263 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb\": container with ID starting with 0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb not found: ID does not exist" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839287 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"} err="failed to get container status \"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb\": rpc error: code = NotFound desc = could not find container \"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb\": container with ID starting with 0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb not found: ID does not exist"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839301 4758 scope.go:117] "RemoveContainer" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.839508 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7\": container with ID starting with 5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7 not found: ID does not exist" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839528 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"} err="failed to get container status \"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7\": rpc error: code = NotFound desc = could not find container \"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7\": container with ID starting with 5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7 not found: ID does not exist"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839542 4758 scope.go:117] "RemoveContainer" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.839873 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22\": container with ID starting with 0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22 not found: ID does not exist" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839930 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"} err="failed to get container status \"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22\": rpc error: code = NotFound desc = could not find container \"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22\": container with ID starting with 0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22 not found: ID does not exist"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.018777 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.026897 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.046475 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.046956 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.046980 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.046994 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047002 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core"
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.047024 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047032 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.047372 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047390 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047605 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047630 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047646 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047669 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.049760 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.059386 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.059549 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.065795 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120154 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120182 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120212 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120319 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120340 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222299 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222410 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222498 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222608 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.223110 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.223721 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.226868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.226956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.227851 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.228888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.244082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.366102 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.850958 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: W0130 08:52:02.907150 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8782f55b_f6c6_4fa1_bf56_1cea92cac9c9.slice/crio-65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981 WatchSource:0}: Error finding container 65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981: Status 404 returned error can't find the container with id 65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.917461 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.711363 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e"}
Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.711668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981"}
Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.712844 4758 generic.go:334] "Generic (PLEG): container finished" podID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerID="638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f" exitCode=0
Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.712873 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerDied","Data":"638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f"}
Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.780802 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" path="/var/lib/kubelet/pods/63c569ce-66c7-4001-8478-3f20fb34b143/volumes"
Jan 30 08:52:04 crc kubenswrapper[4758]: I0130 08:52:04.722572 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54"}
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.056375 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") "
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200859 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") "
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200918 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") "
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200949 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") "
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.206165 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts" (OuterVolumeSpecName: "scripts") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.206350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr" (OuterVolumeSpecName: "kube-api-access-4s5dr") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "kube-api-access-4s5dr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.240330 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data" (OuterVolumeSpecName: "config-data") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.247505 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303626 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303662 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303693 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303701 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.736278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerDied","Data":"ff53183262e2ed46b22c3c8849edd4292f90605608d79f5735d4ef8ac3de71f3"}
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.737722 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff53183262e2ed46b22c3c8849edd4292f90605608d79f5735d4ef8ac3de71f3"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.736567 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.741326 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241"}
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.863611 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 08:52:05 crc kubenswrapper[4758]: E0130 08:52:05.864030 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerName="nova-cell0-conductor-db-sync"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.864064 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerName="nova-cell0-conductor-db-sync"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.864311 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerName="nova-cell0-conductor-db-sync"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.865171 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.871091 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wlml7"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.871195 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.900074 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.040028 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.040107 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.040272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.141791 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.141864 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.141894 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.149928 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.155224 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.159604 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.180765 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.704409 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.780187 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerStarted","Data":"d7166f91e2c58a7c0393d358a9d02c9055600e84476d26b20a2d3fae382bf66f"}
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.519580 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.789546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerStarted","Data":"57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e"}
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.790656 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.792440 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27"}
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.793675 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.820607 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8205847569999998 podStartE2EDuration="2.820584757s" podCreationTimestamp="2026-01-30 08:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:07.810422774 +0000 UTC m=+1332.782734325" watchObservedRunningTime="2026-01-30 08:52:07.820584757 +0000 UTC m=+1332.792896308"
Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.861226 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.782653614 podStartE2EDuration="5.861206438s" podCreationTimestamp="2026-01-30 08:52:02 +0000 UTC" firstStartedPulling="2026-01-30 08:52:02.917110507 +0000 UTC m=+1327.889422058" lastFinishedPulling="2026-01-30 08:52:06.995663341 +0000 UTC m=+1331.967974882" observedRunningTime="2026-01-30 08:52:07.856416405 +0000 UTC m=+1332.828727956" watchObservedRunningTime="2026-01-30 08:52:07.861206438 +0000 UTC m=+1332.833517989"
Jan 30 08:52:08 crc kubenswrapper[4758]: I0130 08:52:08.801735 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" gracePeriod=30
Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.231773 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.820983 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" containerID="cri-o://7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" gracePeriod=30
Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.821125 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" containerID="cri-o://64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" gracePeriod=30
Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.821159 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" containerID="cri-o://0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" gracePeriod=30
Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.821266 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" containerID="cri-o://e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" gracePeriod=30
Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.183813 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.189001 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.191483 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.191594 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor"
Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.833755 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" exitCode=0
Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834009 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" exitCode=2
Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834019 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" exitCode=0
Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.833853 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27"}
Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241"}
Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54"}
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.309420 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.407743 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.407894 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.407923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408002 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408077 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408187 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") "
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408257 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408797 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.409336 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.409352 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.426838 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk" (OuterVolumeSpecName: "kube-api-access-8xtwk") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "kube-api-access-8xtwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.428305 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts" (OuterVolumeSpecName: "scripts") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.441725 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.475547 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.496835 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data" (OuterVolumeSpecName: "config-data") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511681 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511720 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511732 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511745 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511756 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.846960 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" exitCode=0 Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.847001 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.847018 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e"} Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.856195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981"} Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.856223 4758 scope.go:117] "RemoveContainer" containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.902397 4758 scope.go:117] "RemoveContainer" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.907610 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.921906 4758 scope.go:117] "RemoveContainer" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" Jan 30 08:52:12 crc 
kubenswrapper[4758]: I0130 08:52:12.926889 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.950703 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951064 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951080 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951099 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951107 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951122 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951127 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951134 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951139 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951296 4758 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951312 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951319 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951326 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.952835 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.961990 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.962462 4758 scope.go:117] "RemoveContainer" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.963317 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.032134 4758 scope.go:117] "RemoveContainer" containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.032814 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27\": container with ID starting with 64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27 not found: ID does not exist" 
containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.032878 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27"} err="failed to get container status \"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27\": rpc error: code = NotFound desc = could not find container \"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27\": container with ID starting with 64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27 not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.032911 4758 scope.go:117] "RemoveContainer" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.033417 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241\": container with ID starting with 0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241 not found: ID does not exist" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033472 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241"} err="failed to get container status \"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241\": rpc error: code = NotFound desc = could not find container \"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241\": container with ID starting with 0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241 not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033506 4758 scope.go:117] 
"RemoveContainer" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.033869 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54\": container with ID starting with e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54 not found: ID does not exist" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033896 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54"} err="failed to get container status \"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54\": rpc error: code = NotFound desc = could not find container \"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54\": container with ID starting with e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54 not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033911 4758 scope.go:117] "RemoveContainer" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.034263 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e\": container with ID starting with 7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e not found: ID does not exist" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.034292 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e"} err="failed to get container status \"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e\": rpc error: code = NotFound desc = could not find container \"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e\": container with ID starting with 7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.101155 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.128706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129540 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129648 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129757 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod 
\"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.132367 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.132535 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234489 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234562 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 
08:52:13.234638 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234690 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234733 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234782 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.235496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"ceilometer-0\" (UID: 
\"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.235664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.241021 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.245923 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.254827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.256956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.261948 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpqsg\" (UniqueName: 
\"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.334247 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.782007 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" path="/var/lib/kubelet/pods/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9/volumes" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.819932 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.863417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"740c53299b30df6a094dca8b4457e862311172f260474801e83e635d2a8a2502"} Jan 30 08:52:14 crc kubenswrapper[4758]: I0130 08:52:14.873717 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7"} Jan 30 08:52:15 crc kubenswrapper[4758]: I0130 08:52:15.885985 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4"} Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.200569 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.205708 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.216605 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.216677 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.898904 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901421 4758 generic.go:334] "Generic (PLEG): container finished" podID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerID="a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150" exitCode=137 Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" 
event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerDied","Data":"a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901502 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"08a2a2ba15e166e2f950a796369d8a82583e326b34dda5dca164aed61eef649b"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901517 4758 scope.go:117] "RemoveContainer" containerID="33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27" Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.928983 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0" exitCode=137 Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.929055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.929098 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"} Jan 30 08:52:17 crc kubenswrapper[4758]: I0130 08:52:17.126369 4758 scope.go:117] "RemoveContainer" containerID="5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a" Jan 30 08:52:18 crc kubenswrapper[4758]: I0130 08:52:18.958467 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7"} Jan 
30 08:52:18 crc kubenswrapper[4758]: I0130 08:52:18.959137 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:52:18 crc kubenswrapper[4758]: I0130 08:52:18.986762 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.115802566 podStartE2EDuration="6.986733195s" podCreationTimestamp="2026-01-30 08:52:12 +0000 UTC" firstStartedPulling="2026-01-30 08:52:13.829860847 +0000 UTC m=+1338.802172398" lastFinishedPulling="2026-01-30 08:52:17.700791476 +0000 UTC m=+1342.673103027" observedRunningTime="2026-01-30 08:52:18.982549564 +0000 UTC m=+1343.954861115" watchObservedRunningTime="2026-01-30 08:52:18.986733195 +0000 UTC m=+1343.959044746" Jan 30 08:52:20 crc kubenswrapper[4758]: E0130 08:52:20.406144 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:52:20 crc kubenswrapper[4758]: I0130 08:52:20.977414 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.183226 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.184583 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.196271 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.196383 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.388614 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:52:22 crc 
kubenswrapper[4758]: I0130 08:52:22.389054 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.389116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.390195 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.390262 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2" gracePeriod=600 Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997018 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2" exitCode=0 Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2"} 
Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997474 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"} Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997496 4758 scope.go:117] "RemoveContainer" containerID="30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2" Jan 30 08:52:24 crc kubenswrapper[4758]: I0130 08:52:24.067383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:52:24 crc kubenswrapper[4758]: E0130 08:52:24.067638 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:52:24 crc kubenswrapper[4758]: E0130 08:52:24.067952 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:52:24 crc kubenswrapper[4758]: E0130 08:52:24.068029 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:54:26.068007338 +0000 UTC m=+1471.040318899 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.009841 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.010190 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.010608 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.043107 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.044248 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.045904 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.183242 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.192613 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.194168 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.194232 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.184905 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.206030 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.225748 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.225869 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:36 crc kubenswrapper[4758]: I0130 08:52:36.019180 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:52:36 crc kubenswrapper[4758]: I0130 08:52:36.043709 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.184115 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.185911 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.188503 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.188566 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.165863 4758 generic.go:334] "Generic (PLEG): container finished" podID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" exitCode=137 Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.165950 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerDied","Data":"57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e"} Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.794776 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.879110 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"35f68976-d7c9-453e-a02c-cd4119a55e3b\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.879348 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"35f68976-d7c9-453e-a02c-cd4119a55e3b\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.879380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"35f68976-d7c9-453e-a02c-cd4119a55e3b\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.890301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2" (OuterVolumeSpecName: "kube-api-access-hg4s2") pod "35f68976-d7c9-453e-a02c-cd4119a55e3b" (UID: "35f68976-d7c9-453e-a02c-cd4119a55e3b"). InnerVolumeSpecName "kube-api-access-hg4s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.918777 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data" (OuterVolumeSpecName: "config-data") pod "35f68976-d7c9-453e-a02c-cd4119a55e3b" (UID: "35f68976-d7c9-453e-a02c-cd4119a55e3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.920333 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35f68976-d7c9-453e-a02c-cd4119a55e3b" (UID: "35f68976-d7c9-453e-a02c-cd4119a55e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.981844 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.982402 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.982414 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.177154 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.177147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerDied","Data":"d7166f91e2c58a7c0393d358a9d02c9055600e84476d26b20a2d3fae382bf66f"} Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.177909 4758 scope.go:117] "RemoveContainer" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.229304 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.239621 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.262718 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: E0130 08:52:40.263276 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.263296 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.263533 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.264395 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.268957 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.271158 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wlml7" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.298049 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.394390 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.394516 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frv6z\" (UniqueName: \"kubernetes.io/projected/d5c8d4f1-2007-458e-a918-35eea3933622-kube-api-access-frv6z\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.394606 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.497175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.497323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frv6z\" (UniqueName: \"kubernetes.io/projected/d5c8d4f1-2007-458e-a918-35eea3933622-kube-api-access-frv6z\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.497396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.503073 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.510918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.530253 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frv6z\" (UniqueName: \"kubernetes.io/projected/d5c8d4f1-2007-458e-a918-35eea3933622-kube-api-access-frv6z\") pod \"nova-cell0-conductor-0\" 
(UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.585177 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:41 crc kubenswrapper[4758]: W0130 08:52:41.459725 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5c8d4f1_2007_458e_a918_35eea3933622.slice/crio-b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f WatchSource:0}: Error finding container b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f: Status 404 returned error can't find the container with id b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f Jan 30 08:52:41 crc kubenswrapper[4758]: I0130 08:52:41.466867 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:41 crc kubenswrapper[4758]: I0130 08:52:41.779465 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" path="/var/lib/kubelet/pods/35f68976-d7c9-453e-a02c-cd4119a55e3b/volumes" Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.206024 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5c8d4f1-2007-458e-a918-35eea3933622","Type":"ContainerStarted","Data":"7d3208a4a87c7be40628ac1af32d22786c33021117db6335d3b791365110b969"} Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.206132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5c8d4f1-2007-458e-a918-35eea3933622","Type":"ContainerStarted","Data":"b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f"} Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.206168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.224887 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.224869144 podStartE2EDuration="2.224869144s" podCreationTimestamp="2026-01-30 08:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:42.222729227 +0000 UTC m=+1367.195040788" watchObservedRunningTime="2026-01-30 08:52:42.224869144 +0000 UTC m=+1367.197180685" Jan 30 08:52:43 crc kubenswrapper[4758]: I0130 08:52:43.342160 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 08:52:47 crc kubenswrapper[4758]: I0130 08:52:47.527281 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:47 crc kubenswrapper[4758]: I0130 08:52:47.528084 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" containerID="cri-o://a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" gracePeriod=30 Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.166609 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256089 4758 generic.go:334] "Generic (PLEG): container finished" podID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" exitCode=2 Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256139 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerDied","Data":"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9"} Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerDied","Data":"d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7"} Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256193 4758 scope.go:117] "RemoveContainer" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256325 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.264244 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.294928 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952" (OuterVolumeSpecName: "kube-api-access-5c952") pod "cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" (UID: "cbb296cf-3469-43c3-9ebe-8fd1d31c00a6"). InnerVolumeSpecName "kube-api-access-5c952". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.309190 4758 scope.go:117] "RemoveContainer" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" Jan 30 08:52:48 crc kubenswrapper[4758]: E0130 08:52:48.310831 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9\": container with ID starting with a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9 not found: ID does not exist" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.310877 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9"} err="failed to get container status \"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9\": rpc error: code = NotFound desc = could not find container 
\"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9\": container with ID starting with a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9 not found: ID does not exist" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.367247 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.591350 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.605274 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.617499 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: E0130 08:52:48.617892 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.617914 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.618129 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.618721 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.620842 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.632504 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.721278 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773490 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94f4\" (UniqueName: 
\"kubernetes.io/projected/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-api-access-c94f4\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875675 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875799 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-api-access-c94f4\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.880050 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.880559 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.886452 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.900584 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-api-access-c94f4\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.936128 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:49 crc kubenswrapper[4758]: I0130 08:52:49.440027 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:49 crc kubenswrapper[4758]: I0130 08:52:49.778161 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" path="/var/lib/kubelet/pods/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6/volumes" Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.002806 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003423 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" containerID="cri-o://4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003551 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" containerID="cri-o://ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003644 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" containerID="cri-o://98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003782 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" containerID="cri-o://aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff" 
gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309185 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7" exitCode=0 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309227 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff" exitCode=2 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7"} Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309311 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff"} Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.312533 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"402b7d3e-d66f-412a-a3a8-4c45a9a47628","Type":"ContainerStarted","Data":"a04a59d8b62c3252a6690199f8633042a43e4a9415c8b3ff8e67be1189cc7df5"} Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.614814 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.689288 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.711427 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:51 crc kubenswrapper[4758]: 
I0130 08:52:51.323212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"402b7d3e-d66f-412a-a3a8-4c45a9a47628","Type":"ContainerStarted","Data":"4714c8bdf10f0594c4277ef50d8cc0f86fd1275aa644540ed93918cd31f79fb4"} Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.323320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.326674 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7" exitCode=0 Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.326725 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7"} Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.360680 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.633367436 podStartE2EDuration="3.360658289s" podCreationTimestamp="2026-01-30 08:52:48 +0000 UTC" firstStartedPulling="2026-01-30 08:52:49.463334023 +0000 UTC m=+1374.435645574" lastFinishedPulling="2026-01-30 08:52:50.190624876 +0000 UTC m=+1375.162936427" observedRunningTime="2026-01-30 08:52:51.345084982 +0000 UTC m=+1376.317396553" watchObservedRunningTime="2026-01-30 08:52:51.360658289 +0000 UTC m=+1376.332969840" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.475382 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.476624 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.482212 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.484174 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.494234 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636181 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.737828 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.737908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.737926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.738367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.743799 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.749742 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.768688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.847236 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.927402 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.928584 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.957451 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.968678 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.970455 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: W0130 08:52:52.028352 4758 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: secrets "nova-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 08:52:52 crc kubenswrapper[4758]: E0130 08:52:52.028590 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.029877 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.043955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.044113 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.044135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.092692 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.097485 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146106 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146164 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146239 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod 
\"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146258 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146329 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146362 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.169435 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 
08:52:52.186494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.245893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.246353 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.248953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249020 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249199 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.257766 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.325403 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.395927 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.401513 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.410251 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.440983 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.558360 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.558443 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.558472 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.672137 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 
08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.672574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.672600 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.684998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.703755 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.711881 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.746650 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.863487 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.869636 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.885876 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.889015 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.889169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.899215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.899529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.934031 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.955114 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.992509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.011610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.013157 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.013348 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.019676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.014325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.044566 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.066514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.068359 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.102649 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 08:52:53 crc kubenswrapper[4758]: W0130 08:52:53.148537 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c7907da_98f4_46c2_9089_7227516cf739.slice/crio-9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24 WatchSource:0}: Error finding container 9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24: Status 404 returned error can't find the container with id 9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24 Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.186624 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.234553 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.299536 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.301910 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.369940 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370412 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370560 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370639 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370754 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" 
(UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.409429 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.573823 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.574776 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.574888 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.574980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.575091 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.588534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.590761 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.600926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.603024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.612663 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" 
event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerStarted","Data":"9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24"} Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.616328 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.620669 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.621743 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.937009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.197632 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.237731 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.481156 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.484008 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.491963 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.495675 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.498487 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644416 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644478 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644510 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644557 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.649185 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.654852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerStarted","Data":"0554dd0e1bf67b74e082f84d57d630b9e35218363af5e1129b9eccdc1687b263"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.663387 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerStarted","Data":"f59aee87045b6560ae771d39cafdf29ecf7529163d23a6f08629ad7171493e30"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.673598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerStarted","Data":"2c2f12b4fc7394d7f62cb7f9a1ba9314155b34b91f9d418d8afd7c811d330122"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.689496 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4" exitCode=0 Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.689573 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.690895 4758 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerStarted","Data":"0facd9f58ad103e6d6bf1af4bf8a03e376d0d7dc402e1ef4b1234121e0c87d41"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.696566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerStarted","Data":"87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.728944 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xp8d6" podStartSLOduration=3.728910153 podStartE2EDuration="3.728910153s" podCreationTimestamp="2026-01-30 08:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:54.723517694 +0000 UTC m=+1379.695829245" watchObservedRunningTime="2026-01-30 08:52:54.728910153 +0000 UTC m=+1379.701221704" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748263 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748375 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.762013 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.762907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.763410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.772936 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.829307 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.840230 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.951987 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952166 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952218 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: 
\"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952452 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952517 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952585 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.954913 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.959154 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.984907 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg" (OuterVolumeSpecName: "kube-api-access-mpqsg") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "kube-api-access-mpqsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.011504 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts" (OuterVolumeSpecName: "scripts") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.062251 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.062534 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.062651 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.064155 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 
crc kubenswrapper[4758]: I0130 08:52:55.072064 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.167767 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.359242 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data" (OuterVolumeSpecName: "config-data") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.373342 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.416964 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.483424 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.496419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.663327 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.696281 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.773528 4758 generic.go:334] "Generic (PLEG): container finished" podID="88205fec-5592-41f1-a351-daf34b97add7" containerID="a9b45fc847ddf846195914d0ffbe6a09a108519311beafcfe30dbeedd9e7ad3b" exitCode=0 Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.773668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerDied","Data":"a9b45fc847ddf846195914d0ffbe6a09a108519311beafcfe30dbeedd9e7ad3b"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.773719 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerStarted","Data":"44f8b069d0be92a8bc294a5f42966c64f273339f26c92d95f7dfd0fd9149bca0"} Jan 30 08:52:55 crc 
kubenswrapper[4758]: I0130 08:52:55.849779 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.874451 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"740c53299b30df6a094dca8b4457e862311172f260474801e83e635d2a8a2502"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.874494 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerStarted","Data":"4f43893ac45e4922957242a9db4708b2364bce5aae74e0b958012018e45639fe"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.874529 4758 scope.go:117] "RemoveContainer" containerID="ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.019670 4758 scope.go:117] "RemoveContainer" containerID="aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.061350 4758 scope.go:117] "RemoveContainer" containerID="98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.070626 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.092274 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.102932 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103501 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" Jan 30 08:52:56 crc 
kubenswrapper[4758]: I0130 08:52:56.103521 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103539 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103547 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103575 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103585 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103652 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103663 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103904 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103927 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103937 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" Jan 
30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103963 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.111963 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.115272 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.120740 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.121020 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.121188 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.135255 4758 scope.go:117] "RemoveContainer" containerID="4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 
08:52:56.215719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215834 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215951 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.216255 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.216415 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.216562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhrnb\" 
(UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319390 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319455 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 
30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319492 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319520 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.324371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.325290 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.325918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.326195 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.328112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.337317 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.343366 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.348872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.563038 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.890359 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerStarted","Data":"83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4"} Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.964828 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-95vnq" podStartSLOduration=2.964807751 podStartE2EDuration="2.964807751s" podCreationTimestamp="2026-01-30 08:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:56.956564753 +0000 UTC m=+1381.928876324" watchObservedRunningTime="2026-01-30 08:52:56.964807751 +0000 UTC m=+1381.937119302" Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.550253 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.788331 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" path="/var/lib/kubelet/pods/e3078c00-f908-4759-86c3-ce6109c669c9/volumes" Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.933025 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerStarted","Data":"dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995"} Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.933188 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.969678 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" podStartSLOduration=4.969640386 podStartE2EDuration="4.969640386s" podCreationTimestamp="2026-01-30 08:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:57.955731951 +0000 UTC m=+1382.928043502" watchObservedRunningTime="2026-01-30 08:52:57.969640386 +0000 UTC m=+1382.941951947" Jan 30 08:52:58 crc kubenswrapper[4758]: I0130 08:52:58.069095 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:58 crc kubenswrapper[4758]: I0130 08:52:58.093014 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:58 crc kubenswrapper[4758]: I0130 08:52:58.992197 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 08:53:00 crc kubenswrapper[4758]: W0130 08:53:00.307249 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda23235c6_1a6b_42cb_a434_08b7e3555915.slice/crio-b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521 WatchSource:0}: Error finding container b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521: Status 404 returned error can't find the container with id b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521 Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.780683 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.852486 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"] Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.852817 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" 
podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log" containerID="cri-o://928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032" gracePeriod=30 Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.852984 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" containerID="cri-o://f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b" gracePeriod=30 Jan 30 08:53:00 crc kubenswrapper[4758]: E0130 08:53:00.870992 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-75f5775999-fhl5h" podUID="c2358e5c-db98-4b7b-8b6c-2e83132655a9" Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.989973 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.989990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521"} Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.481453 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"] Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.484172 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.512249 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"] Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.587402 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.587503 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.587547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.688737 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.688819 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.688936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.689349 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.689396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.710075 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.846395 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:03 crc kubenswrapper[4758]: I0130 08:53:03.623382 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:53:03 crc kubenswrapper[4758]: I0130 08:53:03.693502 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:53:03 crc kubenswrapper[4758]: I0130 08:53:03.693954 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" containerID="cri-o://516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642" gracePeriod=10 Jan 30 08:53:04 crc kubenswrapper[4758]: I0130 08:53:04.022099 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerID="516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642" exitCode=0 Jan 30 08:53:04 crc kubenswrapper[4758]: I0130 08:53:04.022381 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerDied","Data":"516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642"} Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.070507 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b" exitCode=0 Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.070778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"} Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.070814 4758 scope.go:117] 
"RemoveContainer" containerID="e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.535364 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.657772 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.657903 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.657967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.658127 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.658553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: 
\"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.714309 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px" (OuterVolumeSpecName: "kube-api-access-c65px") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "kube-api-access-c65px". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.763545 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.764911 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"] Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.866831 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:53:05 crc kubenswrapper[4758]: E0130 08:53:05.868887 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:53:05 crc kubenswrapper[4758]: E0130 08:53:05.868915 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:53:05 crc kubenswrapper[4758]: E0130 08:53:05.868971 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" 
failed. No retries permitted until 2026-01-30 08:55:07.868946679 +0000 UTC m=+1512.841258230 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.010944 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.043571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.071937 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.110180 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec" gracePeriod=30 Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.137641 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.148305 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.18498107 podStartE2EDuration="14.148284118s" podCreationTimestamp="2026-01-30 08:52:52 +0000 UTC" firstStartedPulling="2026-01-30 08:52:53.955673552 +0000 UTC m=+1378.927985103" lastFinishedPulling="2026-01-30 08:53:04.9189766 +0000 UTC m=+1389.891288151" observedRunningTime="2026-01-30 08:53:06.144642994 +0000 UTC m=+1391.116954555" watchObservedRunningTime="2026-01-30 08:53:06.148284118 +0000 UTC m=+1391.120595669" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.155896 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.174088 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.179415 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.845990436 podStartE2EDuration="15.179388291s" podCreationTimestamp="2026-01-30 08:52:51 +0000 UTC" firstStartedPulling="2026-01-30 08:52:53.513813879 +0000 UTC m=+1378.486125430" lastFinishedPulling="2026-01-30 08:53:04.847211734 +0000 UTC m=+1389.819523285" observedRunningTime="2026-01-30 08:53:06.177799351 +0000 UTC m=+1391.150110912" watchObservedRunningTime="2026-01-30 08:53:06.179388291 +0000 UTC m=+1391.151699842" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.218393 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config" (OuterVolumeSpecName: "config") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.253671 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.276794 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.277072 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.392692 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerStarted","Data":"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393014 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerStarted","Data":"63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393239 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerStarted","Data":"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393345 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"126ded88ff78aed69c62e79ace5301ae74868b1dd3131b082272b1efab898826"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393450 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" 
event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerDied","Data":"6d3908fc7a3657c2e0650ef09a082b68d8f4744d5e8a7add60e40d22f8b4f455"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393547 4758 scope.go:117] "RemoveContainer" containerID="516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.475588 4758 scope.go:117] "RemoveContainer" containerID="0fad6fa55bd6734af8e520f919028effe9343fb8ce554bc0f6cdb0c2e748ee42" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.529653 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.569016 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.163621 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.178754 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerStarted","Data":"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.178824 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerStarted","Data":"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.196293 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.213458 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" containerID="cri-o://508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" gracePeriod=30 Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.213542 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerStarted","Data":"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.213609 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" containerID="cri-o://20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" gracePeriod=30 Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.238524 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.437838835 podStartE2EDuration="16.238500984s" podCreationTimestamp="2026-01-30 08:52:51 +0000 UTC" firstStartedPulling="2026-01-30 08:52:54.200560783 +0000 UTC m=+1379.172872334" lastFinishedPulling="2026-01-30 08:53:05.001222922 +0000 UTC m=+1389.973534483" observedRunningTime="2026-01-30 08:53:07.233178367 +0000 UTC m=+1392.205489938" watchObservedRunningTime="2026-01-30 08:53:07.238500984 +0000 UTC m=+1392.210812535" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.247013 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.267296 
4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.5587698880000005 podStartE2EDuration="15.267264354s" podCreationTimestamp="2026-01-30 08:52:52 +0000 UTC" firstStartedPulling="2026-01-30 08:52:54.243124335 +0000 UTC m=+1379.215435886" lastFinishedPulling="2026-01-30 08:53:04.951618801 +0000 UTC m=+1389.923930352" observedRunningTime="2026-01-30 08:53:07.261103621 +0000 UTC m=+1392.233415172" watchObservedRunningTime="2026-01-30 08:53:07.267264354 +0000 UTC m=+1392.239575915" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.747671 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.784212 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" path="/var/lib/kubelet/pods/7f24b01a-1d08-4fcc-9bbc-591644e40964/volumes" Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.230221 4758 generic.go:334] "Generic (PLEG): container finished" podID="c423a7ce-2dda-478f-8579-641954eee4a9" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307" exitCode=0 Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.230462 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"} Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.239664 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9"} Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.244006 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" exitCode=143 Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.245200 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerDied","Data":"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0"} Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.246691 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.246739 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:09 crc kubenswrapper[4758]: I0130 08:53:09.256378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c"} Jan 30 08:53:09 crc kubenswrapper[4758]: I0130 08:53:09.261570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"} Jan 30 08:53:11 crc kubenswrapper[4758]: I0130 08:53:11.281890 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c7907da-98f4-46c2-9089-7227516cf739" containerID="87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e" exitCode=0 Jan 30 08:53:11 crc kubenswrapper[4758]: I0130 08:53:11.283811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerDied","Data":"87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e"} Jan 30 08:53:12 crc kubenswrapper[4758]: I0130 
08:53:12.247462 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 08:53:12 crc kubenswrapper[4758]: I0130 08:53:12.281585 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 08:53:12 crc kubenswrapper[4758]: I0130 08:53:12.333199 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.036138 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163354 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163557 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163643 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163705 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod 
\"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.173607 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts" (OuterVolumeSpecName: "scripts") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.187846 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.188350 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.201106 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m" (OuterVolumeSpecName: "kube-api-access-sfl8m") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "kube-api-access-sfl8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.211433 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.213480 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data" (OuterVolumeSpecName: "config-data") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267245 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267522 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267667 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267757 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.310926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerDied","Data":"9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24"} Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.310984 4758 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.310909 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.456616 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.479838 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.272303 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.272453 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.321419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6"} Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.321787 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" containerID="cri-o://63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674" gracePeriod=30 Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 
08:53:14.321988 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log" containerID="cri-o://f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" gracePeriod=30 Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.322064 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" containerID="cri-o://462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" gracePeriod=30 Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.366756 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.049155939 podStartE2EDuration="18.366730684s" podCreationTimestamp="2026-01-30 08:52:56 +0000 UTC" firstStartedPulling="2026-01-30 08:53:01.691504771 +0000 UTC m=+1386.663816322" lastFinishedPulling="2026-01-30 08:53:14.009079516 +0000 UTC m=+1398.981391067" observedRunningTime="2026-01-30 08:53:14.354620985 +0000 UTC m=+1399.326932526" watchObservedRunningTime="2026-01-30 08:53:14.366730684 +0000 UTC m=+1399.339042235" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.346851 4758 generic.go:334] "Generic (PLEG): container finished" podID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" exitCode=143 Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.346915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerDied","Data":"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"} Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.361765 4758 generic.go:334] "Generic (PLEG): container finished" podID="7fbb8129-e72f-4925-a38a-06505fe53fb3" 
containerID="63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674" exitCode=0 Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.361841 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerDied","Data":"63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674"} Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.387364 4758 generic.go:334] "Generic (PLEG): container finished" podID="c423a7ce-2dda-478f-8579-641954eee4a9" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f" exitCode=0 Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.388413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"} Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.388468 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.692965 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.820702 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"7fbb8129-e72f-4925-a38a-06505fe53fb3\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.820919 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"7fbb8129-e72f-4925-a38a-06505fe53fb3\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.820970 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"7fbb8129-e72f-4925-a38a-06505fe53fb3\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.831325 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr" (OuterVolumeSpecName: "kube-api-access-dglqr") pod "7fbb8129-e72f-4925-a38a-06505fe53fb3" (UID: "7fbb8129-e72f-4925-a38a-06505fe53fb3"). InnerVolumeSpecName "kube-api-access-dglqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.865940 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data" (OuterVolumeSpecName: "config-data") pod "7fbb8129-e72f-4925-a38a-06505fe53fb3" (UID: "7fbb8129-e72f-4925-a38a-06505fe53fb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.894366 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbb8129-e72f-4925-a38a-06505fe53fb3" (UID: "7fbb8129-e72f-4925-a38a-06505fe53fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.924754 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.924797 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.924805 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.010687 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.399225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" 
event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"} Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.402549 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.402589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerDied","Data":"f59aee87045b6560ae771d39cafdf29ecf7529163d23a6f08629ad7171493e30"} Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.402625 4758 scope.go:117] "RemoveContainer" containerID="63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.429303 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zpwcm" podStartSLOduration=5.757055066 podStartE2EDuration="14.429278449s" podCreationTimestamp="2026-01-30 08:53:02 +0000 UTC" firstStartedPulling="2026-01-30 08:53:07.166666427 +0000 UTC m=+1392.138977978" lastFinishedPulling="2026-01-30 08:53:15.83888981 +0000 UTC m=+1400.811201361" observedRunningTime="2026-01-30 08:53:16.426034639 +0000 UTC m=+1401.398346220" watchObservedRunningTime="2026-01-30 08:53:16.429278449 +0000 UTC m=+1401.401590011" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.479100 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.494922 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.516770 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517229 4758 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517251 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517274 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517292 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517299 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="init" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517305 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="init" Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517323 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7907da-98f4-46c2-9089-7227516cf739" containerName="nova-manage" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517330 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7907da-98f4-46c2-9089-7227516cf739" containerName="nova-manage" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517513 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517531 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7907da-98f4-46c2-9089-7227516cf739" containerName="nova-manage" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517546 4758 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.518212 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.525016 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.589603 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.653438 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.653506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.653575 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.755724 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.755795 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.755861 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.765224 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.773722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.779809 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " 
pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.891080 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:17 crc kubenswrapper[4758]: I0130 08:53:17.559276 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:17 crc kubenswrapper[4758]: I0130 08:53:17.802247 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" path="/var/lib/kubelet/pods/7fbb8129-e72f-4925-a38a-06505fe53fb3/volumes" Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.428622 4758 generic.go:334] "Generic (PLEG): container finished" podID="42581858-ead1-4898-9fad-72411bf3c6a4" containerID="83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4" exitCode=0 Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.428960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerDied","Data":"83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4"} Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.431082 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerStarted","Data":"d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be"} Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.431125 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerStarted","Data":"7df77a789ad68a7f771f50962b45656f81dbca922fc56cf93ed7fd78a2640d9a"} Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.522788 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.522765724 
podStartE2EDuration="2.522765724s" podCreationTimestamp="2026-01-30 08:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:18.516968282 +0000 UTC m=+1403.489279863" watchObservedRunningTime="2026-01-30 08:53:18.522765724 +0000 UTC m=+1403.495077275" Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.836773 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959153 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959454 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959574 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " Jan 30 08:53:19 crc 
kubenswrapper[4758]: I0130 08:53:19.982371 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts" (OuterVolumeSpecName: "scripts") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.982541 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455" (OuterVolumeSpecName: "kube-api-access-kq455") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "kube-api-access-kq455". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.995131 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.006660 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data" (OuterVolumeSpecName: "config-data") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062409 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062471 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062484 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062493 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.451578 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerDied","Data":"4f43893ac45e4922957242a9db4708b2364bce5aae74e0b958012018e45639fe"} Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.451613 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.451631 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f43893ac45e4922957242a9db4708b2364bce5aae74e0b958012018e45639fe" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.618439 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 08:53:20 crc kubenswrapper[4758]: E0130 08:53:20.618961 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" containerName="nova-cell1-conductor-db-sync" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.618984 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" containerName="nova-cell1-conductor-db-sync" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.619247 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" containerName="nova-cell1-conductor-db-sync" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.620086 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.626677 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.637300 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.775743 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.775846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrgm\" (UniqueName: \"kubernetes.io/projected/9de33ead-37a1-4675-bad7-79b672a0c954-kube-api-access-bgrgm\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.775923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.877911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc 
kubenswrapper[4758]: I0130 08:53:20.878014 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrgm\" (UniqueName: \"kubernetes.io/projected/9de33ead-37a1-4675-bad7-79b672a0c954-kube-api-access-bgrgm\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.878141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.884734 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.885749 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.901312 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrgm\" (UniqueName: \"kubernetes.io/projected/9de33ead-37a1-4675-bad7-79b672a0c954-kube-api-access-bgrgm\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.946880 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:21 crc kubenswrapper[4758]: I0130 08:53:21.492011 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 08:53:21 crc kubenswrapper[4758]: I0130 08:53:21.892019 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.485385 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9de33ead-37a1-4675-bad7-79b672a0c954","Type":"ContainerStarted","Data":"25dbab7f3a5fe899a55b3f80388fbfc8b188c0fc73d038179c36d02035a2c61b"} Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.485428 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9de33ead-37a1-4675-bad7-79b672a0c954","Type":"ContainerStarted","Data":"b66c18c03a83a4ec982c4df58490bed0ca3bdd9ce851d858c92431546587f12e"} Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.485522 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.508315 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.508290617 podStartE2EDuration="2.508290617s" podCreationTimestamp="2026-01-30 08:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:22.504401055 +0000 UTC m=+1407.476712596" watchObservedRunningTime="2026-01-30 08:53:22.508290617 +0000 UTC m=+1407.480602168" Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.847393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.848676 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.186939 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.187337 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.253400 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.335952 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336070 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336245 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336328 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: 
\"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336874 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs" (OuterVolumeSpecName: "logs") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.364314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5" (OuterVolumeSpecName: "kube-api-access-rwnt5") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "kube-api-access-rwnt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.369844 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data" (OuterVolumeSpecName: "config-data") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.369928 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438434 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438691 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438760 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438822 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.499512 4758 generic.go:334] "Generic (PLEG): container finished" podID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" exitCode=0 Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerDied","Data":"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"} Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500297 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerDied","Data":"0554dd0e1bf67b74e082f84d57d630b9e35218363af5e1129b9eccdc1687b263"} Jan 30 08:53:23 crc kubenswrapper[4758]: 
I0130 08:53:23.500306 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500325 4758 scope.go:117] "RemoveContainer" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.546258 4758 scope.go:117] "RemoveContainer" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.551601 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.575358 4758 scope.go:117] "RemoveContainer" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.577302 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7\": container with ID starting with 462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7 not found: ID does not exist" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.577378 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"} err="failed to get container status \"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7\": rpc error: code = NotFound desc = could not find container \"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7\": container with ID starting with 462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7 not found: ID does not exist" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.577415 4758 scope.go:117] "RemoveContainer" 
containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.579097 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303\": container with ID starting with f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303 not found: ID does not exist" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.579171 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"} err="failed to get container status \"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303\": rpc error: code = NotFound desc = could not find container \"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303\": container with ID starting with f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303 not found: ID does not exist" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.584542 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.600881 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.601432 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601454 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.601497 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" 
containerName="nova-api-log" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601504 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601715 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601754 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.602991 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.606655 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.623812 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.751944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.752031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.752253 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.752304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.791337 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" path="/var/lib/kubelet/pods/a20097fd-7119-487f-8a17-6b1f7f0f5bc7/volumes" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.854453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.855001 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.855174 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.855214 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.859900 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.862496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.862838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.889679 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0" Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.897979 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zpwcm" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server" probeResult="failure" output=< Jan 30 08:53:23 crc kubenswrapper[4758]: timeout: failed to connect service 
":50051" within 1s Jan 30 08:53:23 crc kubenswrapper[4758]: > Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.925041 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:24 crc kubenswrapper[4758]: I0130 08:53:24.383498 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:24 crc kubenswrapper[4758]: I0130 08:53:24.511429 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerStarted","Data":"73e4290d9c9ecf7bf09bd3149f6a065ae76b3173eaddcd9c2fd875f062240283"} Jan 30 08:53:25 crc kubenswrapper[4758]: I0130 08:53:25.523943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerStarted","Data":"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc"} Jan 30 08:53:25 crc kubenswrapper[4758]: I0130 08:53:25.524333 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerStarted","Data":"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de"} Jan 30 08:53:25 crc kubenswrapper[4758]: I0130 08:53:25.547466 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.547441644 podStartE2EDuration="2.547441644s" podCreationTimestamp="2026-01-30 08:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:25.540621271 +0000 UTC m=+1410.512932822" watchObservedRunningTime="2026-01-30 08:53:25.547441644 +0000 UTC m=+1410.519753215" Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.010765 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76fc974bd8-4mnvj" 
podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.011169 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.590379 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.892311 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.919373 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 08:53:27 crc kubenswrapper[4758]: I0130 08:53:27.573808 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:30.992180 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.341270 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409477 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409568 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409711 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409783 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.410476 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs" (OuterVolumeSpecName: "logs") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409876 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.410767 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.412539 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.429601 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897" (OuterVolumeSpecName: "kube-api-access-wm897") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "kube-api-access-wm897". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.429764 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.459371 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.471335 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts" (OuterVolumeSpecName: "scripts") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.490115 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.498254 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data" (OuterVolumeSpecName: "config-data") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514696 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514727 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514740 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514754 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514764 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514776 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.583912 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032" exitCode=137
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.583963 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"}
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.583992 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"a0f4a512c849aa8b080efc203f40fc92f70f7018d35ed2e40b0a8bd341bb6b8b"}
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.584011 4758 scope.go:117] "RemoveContainer" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.584402 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.647339 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.657441 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.785771 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365b123c-aa7f-464d-b659-78154f86d42f" path="/var/lib/kubelet/pods/365b123c-aa7f-464d-b659-78154f86d42f/volumes"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.787659 4758 scope.go:117] "RemoveContainer" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.808257 4758 scope.go:117] "RemoveContainer" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"
Jan 30 08:53:31 crc kubenswrapper[4758]: E0130 08:53:31.808770 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b\": container with ID starting with f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b not found: ID does not exist" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.808823 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"} err="failed to get container status \"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b\": rpc error: code = NotFound desc = could not find container \"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b\": container with ID starting with f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b not found: ID does not exist"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.808847 4758 scope.go:117] "RemoveContainer" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"
Jan 30 08:53:31 crc kubenswrapper[4758]: E0130 08:53:31.809508 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032\": container with ID starting with 928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032 not found: ID does not exist" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.809542 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"} err="failed to get container status \"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032\": rpc error: code = NotFound desc = could not find container \"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032\": container with ID starting with 928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032 not found: ID does not exist"
Jan 30 08:53:32 crc kubenswrapper[4758]: I0130 08:53:32.890378 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:32 crc kubenswrapper[4758]: I0130 08:53:32.937183 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:33 crc kubenswrapper[4758]: I0130 08:53:33.884106 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"]
Jan 30 08:53:33 crc kubenswrapper[4758]: I0130 08:53:33.926334 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 08:53:33 crc kubenswrapper[4758]: I0130 08:53:33.926404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 08:53:34 crc kubenswrapper[4758]: I0130 08:53:34.609668 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zpwcm" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server" containerID="cri-o://f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0" gracePeriod=2
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.013650 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.013750 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.114221 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.188737 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"c423a7ce-2dda-478f-8579-641954eee4a9\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") "
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.189179 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"c423a7ce-2dda-478f-8579-641954eee4a9\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") "
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.189673 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"c423a7ce-2dda-478f-8579-641954eee4a9\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") "
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.197854 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities" (OuterVolumeSpecName: "utilities") pod "c423a7ce-2dda-478f-8579-641954eee4a9" (UID: "c423a7ce-2dda-478f-8579-641954eee4a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.217396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl" (OuterVolumeSpecName: "kube-api-access-cn6cl") pod "c423a7ce-2dda-478f-8579-641954eee4a9" (UID: "c423a7ce-2dda-478f-8579-641954eee4a9"). InnerVolumeSpecName "kube-api-access-cn6cl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.292755 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.292838 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.340960 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c423a7ce-2dda-478f-8579-641954eee4a9" (UID: "c423a7ce-2dda-478f-8579-641954eee4a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.395145 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623296 4758 generic.go:334] "Generic (PLEG): container finished" podID="c423a7ce-2dda-478f-8579-641954eee4a9" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0" exitCode=0
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623347 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623359 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"}
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623424 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"126ded88ff78aed69c62e79ace5301ae74868b1dd3131b082272b1efab898826"}
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623446 4758 scope.go:117] "RemoveContainer" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.649970 4758 scope.go:117] "RemoveContainer" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.701502 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"]
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.703376 4758 scope.go:117] "RemoveContainer" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.722238 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"]
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.729672 4758 scope.go:117] "RemoveContainer" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"
Jan 30 08:53:35 crc kubenswrapper[4758]: E0130 08:53:35.730420 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0\": container with ID starting with f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0 not found: ID does not exist" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730467 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"} err="failed to get container status \"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0\": rpc error: code = NotFound desc = could not find container \"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0\": container with ID starting with f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0 not found: ID does not exist"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730495 4758 scope.go:117] "RemoveContainer" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"
Jan 30 08:53:35 crc kubenswrapper[4758]: E0130 08:53:35.730911 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f\": container with ID starting with c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f not found: ID does not exist" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730936 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"} err="failed to get container status \"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f\": rpc error: code = NotFound desc = could not find container \"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f\": container with ID starting with c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f not found: ID does not exist"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730951 4758 scope.go:117] "RemoveContainer" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"
Jan 30 08:53:35 crc kubenswrapper[4758]: E0130 08:53:35.731437 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307\": container with ID starting with bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307 not found: ID does not exist" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.731478 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"} err="failed to get container status \"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307\": rpc error: code = NotFound desc = could not find container \"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307\": container with ID starting with bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307 not found: ID does not exist"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.780191 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" path="/var/lib/kubelet/pods/c423a7ce-2dda-478f-8579-641954eee4a9/volumes"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.500786 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.625773 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") "
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.625857 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") "
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.626061 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") "
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.632070 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx" (OuterVolumeSpecName: "kube-api-access-qqbrx") pod "0b4a1798-01a8-4e2f-8c93-ee4053777f75" (UID: "0b4a1798-01a8-4e2f-8c93-ee4053777f75"). InnerVolumeSpecName "kube-api-access-qqbrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636684 4758 generic.go:334] "Generic (PLEG): container finished" podID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec" exitCode=137
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636727 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerDied","Data":"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"}
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636751 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerDied","Data":"0facd9f58ad103e6d6bf1af4bf8a03e376d0d7dc402e1ef4b1234121e0c87d41"}
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636767 4758 scope.go:117] "RemoveContainer" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.637121 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.663139 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data" (OuterVolumeSpecName: "config-data") pod "0b4a1798-01a8-4e2f-8c93-ee4053777f75" (UID: "0b4a1798-01a8-4e2f-8c93-ee4053777f75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.666130 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b4a1798-01a8-4e2f-8c93-ee4053777f75" (UID: "0b4a1798-01a8-4e2f-8c93-ee4053777f75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.714570 4758 scope.go:117] "RemoveContainer" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.722921 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec\": container with ID starting with 1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec not found: ID does not exist" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.722990 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"} err="failed to get container status \"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec\": rpc error: code = NotFound desc = could not find container \"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec\": container with ID starting with 1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec not found: ID does not exist"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.729054 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.729095 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.729107 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.972100 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.980632 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.998836 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999281 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999303 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999321 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999327 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999341 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999347 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999358 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-utilities"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999364 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-utilities"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999373 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999379 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999389 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-content"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999395 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-content"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999404 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999411 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999427 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999436 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999608 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999625 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999635 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999646 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999657 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999664 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.000346 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.004989 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.005284 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.006790 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.008349 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136139 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136214 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgvqz\" (UniqueName: \"kubernetes.io/projected/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-kube-api-access-zgvqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136380 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136549 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.238245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.238606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.238746 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgvqz\" (UniqueName: \"kubernetes.io/projected/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-kube-api-access-zgvqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.239295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.239808 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.243341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.244531 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.245096 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.247635 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.256255 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgvqz\" (UniqueName: \"kubernetes.io/projected/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-kube-api-access-zgvqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.329844 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.573130 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647706 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647781 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: 
\"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647962 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.648434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs" (OuterVolumeSpecName: "logs") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.648740 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649240 4758 generic.go:334] "Generic (PLEG): container finished" podID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" exitCode=137 Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerDied","Data":"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa"} Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerDied","Data":"2c2f12b4fc7394d7f62cb7f9a1ba9314155b34b91f9d418d8afd7c811d330122"} Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649572 4758 
scope.go:117] "RemoveContainer" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649790 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.655597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j" (OuterVolumeSpecName: "kube-api-access-s284j") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "kube-api-access-s284j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.675100 4758 scope.go:117] "RemoveContainer" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.681096 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data" (OuterVolumeSpecName: "config-data") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.682261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.695280 4758 scope.go:117] "RemoveContainer" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" Jan 30 08:53:37 crc kubenswrapper[4758]: E0130 08:53:37.695824 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa\": container with ID starting with 20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa not found: ID does not exist" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.695870 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa"} err="failed to get container status \"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa\": rpc error: code = NotFound desc = could not find container \"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa\": container with ID starting with 20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa not found: ID does not exist" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.695896 4758 scope.go:117] "RemoveContainer" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" Jan 30 08:53:37 crc kubenswrapper[4758]: E0130 08:53:37.696404 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0\": container with ID starting with 508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0 not found: ID does not exist" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.696440 
4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0"} err="failed to get container status \"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0\": rpc error: code = NotFound desc = could not find container \"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0\": container with ID starting with 508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0 not found: ID does not exist" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.751308 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.751368 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.751382 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.780205 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" path="/var/lib/kubelet/pods/0b4a1798-01a8-4e2f-8c93-ee4053777f75/volumes" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.798241 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:53:37 crc kubenswrapper[4758]: W0130 08:53:37.802453 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7b05a5_3faf_4e02_9bb5_f79a4745f073.slice/crio-bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe WatchSource:0}: Error finding container bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe: Status 404 returned error can't find the container with id bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.981029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.996455 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.004429 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:38 crc kubenswrapper[4758]: E0130 08:53:38.004842 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.004859 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" Jan 30 08:53:38 crc kubenswrapper[4758]: E0130 08:53:38.004881 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.004888 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.007346 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.007380 4758 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.009422 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.025129 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.025590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.033491 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059447 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059624 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161302 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161366 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161456 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.162207 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.166756 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.167086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.169810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc 
kubenswrapper[4758]: I0130 08:53:38.186334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.326203 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.658201 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c7b05a5-3faf-4e02-9bb5-f79a4745f073","Type":"ContainerStarted","Data":"c7b3e14079454eb9debc8b067ef17b72a65ff6eb442ad0116ed04074f155bafe"} Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.658541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c7b05a5-3faf-4e02-9bb5-f79a4745f073","Type":"ContainerStarted","Data":"bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe"} Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.687208 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.68718925 podStartE2EDuration="2.68718925s" podCreationTimestamp="2026-01-30 08:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:38.674146242 +0000 UTC m=+1423.646457813" watchObservedRunningTime="2026-01-30 08:53:38.68718925 +0000 UTC m=+1423.659500801" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.805938 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.674093 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerStarted","Data":"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"} Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.674650 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerStarted","Data":"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"} Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.674668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerStarted","Data":"3f9633af4e5e2d3e1abc96a29475ac2109ec7d4b547a737c821f935b6a92090b"} Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.707717 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.707687535 podStartE2EDuration="2.707687535s" podCreationTimestamp="2026-01-30 08:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:39.695328139 +0000 UTC m=+1424.667639710" watchObservedRunningTime="2026-01-30 08:53:39.707687535 +0000 UTC m=+1424.679999086" Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.780709 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" path="/var/lib/kubelet/pods/01bcaf88-537a-47cd-b50d-6c95e392f2a8/volumes" Jan 30 08:53:42 crc kubenswrapper[4758]: I0130 08:53:42.330169 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.326333 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.326620 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.929444 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.929523 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.930190 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.930214 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.935962 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.940495 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.169855 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.171669 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.199401 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.290946 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291066 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291220 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291260 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291301 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393532 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393589 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393674 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.394436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.394437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.394968 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.395254 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.418302 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: 
\"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.509344 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.040878 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.729796 4758 generic.go:334] "Generic (PLEG): container finished" podID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerID="c6c147de1a4a6872b323ab195d60eaec8350ddff6728805d455fbf3cd9db1508" exitCode=0 Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.729899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerDied","Data":"c6c147de1a4a6872b323ab195d60eaec8350ddff6728805d455fbf3cd9db1508"} Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.730247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerStarted","Data":"ee6038add08fd771bda22b40e423fff940fc9b5d99566b28b9e2c028c0c8fa85"} Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.740612 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerStarted","Data":"6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34"} Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.741075 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.767165 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" 
podStartSLOduration=2.7671404649999998 podStartE2EDuration="2.767140465s" podCreationTimestamp="2026-01-30 08:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:46.759013471 +0000 UTC m=+1431.731325042" watchObservedRunningTime="2026-01-30 08:53:46.767140465 +0000 UTC m=+1431.739452036" Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832216 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832554 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" containerID="cri-o://7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832653 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" containerID="cri-o://3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832658 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" containerID="cri-o://7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832671 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" containerID="cri-o://7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 
08:53:46.881992 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.882352 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" containerID="cri-o://a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.882371 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" containerID="cri-o://f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" gracePeriod=30 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.330825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.356567 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.751065 4758 generic.go:334] "Generic (PLEG): container finished" podID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" exitCode=143 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.751100 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerDied","Data":"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754291 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" exitCode=0 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 
08:53:47.754331 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" exitCode=2 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754340 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" exitCode=0 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754372 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754414 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.778436 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.971394 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.972715 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.978405 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.978727 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.990859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078095 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078390 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078658 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvlv\" (UniqueName: 
\"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.182755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.182974 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.185729 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.185881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.196792 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.197087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.203732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.204341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.304258 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.326452 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.326505 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.879445 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.336360 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.346306 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.502859 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619747 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619859 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619947 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619976 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.620600 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.621101 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.622846 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.622901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.622971 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.623084 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.623969 4758 reconciler_common.go:293] "Volume 
detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.623996 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.647280 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb" (OuterVolumeSpecName: "kube-api-access-hhrnb") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "kube-api-access-hhrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.647613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts" (OuterVolumeSpecName: "scripts") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.716217 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.725342 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.725372 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.725384 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.793329 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" exitCode=0 Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.793431 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.796611 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814258 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerStarted","Data":"fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerStarted","Data":"3e773042da1511a1e81355c6d888b2d542a22fa7435de49aab4644624967fc78"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814382 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814405 4758 scope.go:117] "RemoveContainer" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.818484 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pmsnh" podStartSLOduration=2.8184651130000002 podStartE2EDuration="2.818465113s" podCreationTimestamp="2026-01-30 08:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:49.802846324 +0000 UTC m=+1434.775157885" watchObservedRunningTime="2026-01-30 
08:53:49.818465113 +0000 UTC m=+1434.790776664" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.836313 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.861237 4758 scope.go:117] "RemoveContainer" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.882176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.892545 4758 scope.go:117] "RemoveContainer" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.934645 4758 scope.go:117] "RemoveContainer" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939141 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data" (OuterVolumeSpecName: "config-data") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939241 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939685 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: W0130 08:53:49.939783 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a23235c6-1a6b-42cb-a434-08b7e3555915/volumes/kubernetes.io~secret/config-data Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939800 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data" (OuterVolumeSpecName: "config-data") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.963232 4758 scope.go:117] "RemoveContainer" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.964585 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6\": container with ID starting with 7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6 not found: ID does not exist" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.964641 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6"} err="failed to get container status \"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6\": rpc error: code = NotFound desc = could not find container \"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6\": container with ID starting with 7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6 not found: ID does not exist" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.964674 4758 scope.go:117] "RemoveContainer" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.968060 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c\": container with ID starting with 7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c not found: ID does not exist" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.968101 
4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c"} err="failed to get container status \"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c\": rpc error: code = NotFound desc = could not find container \"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c\": container with ID starting with 7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c not found: ID does not exist" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.968135 4758 scope.go:117] "RemoveContainer" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.969129 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9\": container with ID starting with 3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9 not found: ID does not exist" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.969174 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9"} err="failed to get container status \"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9\": rpc error: code = NotFound desc = could not find container \"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9\": container with ID starting with 3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9 not found: ID does not exist" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.969203 4758 scope.go:117] "RemoveContainer" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 
08:53:49.969533 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9\": container with ID starting with 7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9 not found: ID does not exist" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.969561 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9"} err="failed to get container status \"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9\": rpc error: code = NotFound desc = could not find container \"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9\": container with ID starting with 7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9 not found: ID does not exist" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.041511 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.135815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.155221 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.176873 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177388 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177406 4758 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177421 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177427 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177449 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177455 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177470 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177475 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177660 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177697 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177711 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177722 
4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.179992 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.186162 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.186491 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.186617 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.204532 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66qc\" (UniqueName: \"kubernetes.io/projected/dc56c5be-c70d-44a7-8914-cf2e598f3333-kube-api-access-r66qc\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245727 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245756 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-scripts\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245787 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-config-data\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245866 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-log-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.246030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.246141 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.246414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-run-httpd\") pod \"ceilometer-0\" (UID: 
\"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-log-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348432 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-run-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r66qc\" (UniqueName: \"kubernetes.io/projected/dc56c5be-c70d-44a7-8914-cf2e598f3333-kube-api-access-r66qc\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348539 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348561 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-scripts\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348587 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-config-data\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.349617 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-run-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.349888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-log-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.357631 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 
08:53:50.360824 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.365179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-config-data\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.367458 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-scripts\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.367499 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.382897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r66qc\" (UniqueName: \"kubernetes.io/projected/dc56c5be-c70d-44a7-8914-cf2e598f3333-kube-api-access-r66qc\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.564757 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.724992 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759552 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759656 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759769 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759831 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.779337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm" (OuterVolumeSpecName: "kube-api-access-mzkbm") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "kube-api-access-mzkbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.779541 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs" (OuterVolumeSpecName: "logs") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.811667 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data" (OuterVolumeSpecName: "config-data") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.821724 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.845641 4758 generic.go:334] "Generic (PLEG): container finished" podID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" exitCode=0 Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.845749 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.846586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerDied","Data":"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc"} Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.846618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerDied","Data":"73e4290d9c9ecf7bf09bd3149f6a065ae76b3173eaddcd9c2fd875f062240283"} Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.846636 4758 scope.go:117] "RemoveContainer" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864378 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864408 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864419 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864427 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.881249 4758 scope.go:117] "RemoveContainer" 
containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.909855 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.920502 4758 scope.go:117] "RemoveContainer" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.921251 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc\": container with ID starting with f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc not found: ID does not exist" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.921293 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc"} err="failed to get container status \"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc\": rpc error: code = NotFound desc = could not find container \"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc\": container with ID starting with f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc not found: ID does not exist" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.921319 4758 scope.go:117] "RemoveContainer" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.935021 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de\": container with ID starting with a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de 
not found: ID does not exist" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.935074 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de"} err="failed to get container status \"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de\": rpc error: code = NotFound desc = could not find container \"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de\": container with ID starting with a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de not found: ID does not exist" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.943425 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.972096 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.978468 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978504 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.978529 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978537 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978721 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" Jan 30 08:53:50 crc 
kubenswrapper[4758]: I0130 08:53:50.978734 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.979811 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.983966 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.985625 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.990264 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.020367 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069457 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod 
\"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069520 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069623 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.129337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.172439 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173617 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173862 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.174998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.180750 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.183971 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.188707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.205569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.213685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.317479 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.780516 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" path="/var/lib/kubelet/pods/498895d5-d6fe-4ea3-ae4b-c610c53b89c3/volumes" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.781573 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" path="/var/lib/kubelet/pods/a23235c6-1a6b-42cb-a434-08b7e3555915/volumes" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.855622 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"eba56efc1d0c28b01b9d9446966eeb05317a23c38164905823ab78573a752d66"} Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.954121 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.868314 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"8ce990c1f83055815056fb655ee6a37afaf0dbac152edb3b1fc3adc7454462ec"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.870583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerStarted","Data":"2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.870689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerStarted","Data":"63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.870761 4758 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerStarted","Data":"f554ad2db5ef82928f9d27ccb5db688bfe0dd035e3eb8df1d1564fab173bc5d5"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.891432 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8914135869999997 podStartE2EDuration="2.891413587s" podCreationTimestamp="2026-01-30 08:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:52.88957582 +0000 UTC m=+1437.861887381" watchObservedRunningTime="2026-01-30 08:53:52.891413587 +0000 UTC m=+1437.863725138" Jan 30 08:53:53 crc kubenswrapper[4758]: I0130 08:53:53.890435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"33872e0781c7bc6b3e73e245f915b77e2966fe7ed0f85266a4c1d5bc8c30f441"} Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.512277 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.617186 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.619185 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns" containerID="cri-o://dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995" gracePeriod=10 Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.901746 4758 generic.go:334] "Generic (PLEG): container finished" podID="88205fec-5592-41f1-a351-daf34b97add7" containerID="dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995" 
exitCode=0 Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.901791 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerDied","Data":"dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995"} Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.719682 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.802788 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.803346 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.803637 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.803757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.804780 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.835496 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp" (OuterVolumeSpecName: "kube-api-access-gwttp") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "kube-api-access-gwttp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.878594 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config" (OuterVolumeSpecName: "config") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.899737 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.905375 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913477 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913515 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913532 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913544 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.926078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"55c5b8766354c39b21c6bf27726094e571c7c205d6b744a36a334ccbe8bfce0f"} Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.932552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerDied","Data":"44f8b069d0be92a8bc294a5f42966c64f273339f26c92d95f7dfd0fd9149bca0"} Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.933265 4758 scope.go:117] "RemoveContainer" containerID="dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.932856 4758 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.996016 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.008582 4758 scope.go:117] "RemoveContainer" containerID="a9b45fc847ddf846195914d0ffbe6a09a108519311beafcfe30dbeedd9e7ad3b" Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.020246 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.271842 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.281291 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.778031 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88205fec-5592-41f1-a351-daf34b97add7" path="/var/lib/kubelet/pods/88205fec-5592-41f1-a351-daf34b97add7/volumes" Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.953245 4758 generic.go:334] "Generic (PLEG): container finished" podID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerID="fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b" exitCode=0 Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.953316 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" 
event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerDied","Data":"fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b"} Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.958759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"c9dc5d25c811caadd45d847e47abe0897febb24c5d995725ae457fcb51d63fe3"} Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.959015 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.996818 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.110274981 podStartE2EDuration="7.996800246s" podCreationTimestamp="2026-01-30 08:53:50 +0000 UTC" firstStartedPulling="2026-01-30 08:53:51.152566869 +0000 UTC m=+1436.124878420" lastFinishedPulling="2026-01-30 08:53:57.039086204 +0000 UTC m=+1442.011403685" observedRunningTime="2026-01-30 08:53:57.988880677 +0000 UTC m=+1442.961192228" watchObservedRunningTime="2026-01-30 08:53:57.996800246 +0000 UTC m=+1442.969111797" Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.333812 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.335386 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.342210 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.978380 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.412340 4758 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.487990 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.488163 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.488206 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.488234 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.496176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts" (OuterVolumeSpecName: "scripts") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.497576 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv" (OuterVolumeSpecName: "kube-api-access-dqvlv") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "kube-api-access-dqvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.523789 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data" (OuterVolumeSpecName: "config-data") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.536225 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.590996 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.591046 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.591058 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.591069 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.978726 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.980677 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerDied","Data":"3e773042da1511a1e81355c6d888b2d542a22fa7435de49aab4644624967fc78"} Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.980829 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e773042da1511a1e81355c6d888b2d542a22fa7435de49aab4644624967fc78" Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.092089 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.092624 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler" containerID="cri-o://d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" gracePeriod=30 Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.155073 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.155319 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log" containerID="cri-o://63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2" gracePeriod=30 Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.155456 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api" containerID="cri-o://2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48" gracePeriod=30 Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 
08:54:00.214402 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.988921 4758 generic.go:334] "Generic (PLEG): container finished" podID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerID="2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48" exitCode=0 Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.988947 4758 generic.go:334] "Generic (PLEG): container finished" podID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerID="63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2" exitCode=143 Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.989803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerDied","Data":"2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48"} Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.989833 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerDied","Data":"63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2"} Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.078723 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242641 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242706 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242756 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242959 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242984 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.243121 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.243882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs" (OuterVolumeSpecName: "logs") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.250191 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2" (OuterVolumeSpecName: "kube-api-access-c4rn2") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "kube-api-access-c4rn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.277294 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data" (OuterVolumeSpecName: "config-data") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.306380 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.308622 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.325747 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345697 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345735 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345748 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345762 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345772 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345781 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.894168 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.895466 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.896519 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.896565 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerDied","Data":"f554ad2db5ef82928f9d27ccb5db688bfe0dd035e3eb8df1d1564fab173bc5d5"} Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000855 4758 scope.go:117] "RemoveContainer" containerID="2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000801 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.001064 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" containerID="cri-o://0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" gracePeriod=30 Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000851 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" containerID="cri-o://c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" gracePeriod=30 Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.038241 4758 scope.go:117] "RemoveContainer" containerID="63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.042244 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.054363 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-0"] Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085106 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085588 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerName="nova-manage" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085608 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerName="nova-manage" Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085636 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="init" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085644 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="init" Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085664 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085673 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log" Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085689 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085696 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns" Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085720 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085729 4758 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085980 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.086006 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.086026 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.086071 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerName="nova-manage" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.087140 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.095122 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.095355 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.099291 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.154389 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161372 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-config-data\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161500 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161630 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161769 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dr4f\" 
(UniqueName: \"kubernetes.io/projected/fd2d2fe7-5dac-4f3b-80f6-650712925495-kube-api-access-2dr4f\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161832 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161911 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd2d2fe7-5dac-4f3b-80f6-650712925495-logs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264134 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dr4f\" (UniqueName: \"kubernetes.io/projected/fd2d2fe7-5dac-4f3b-80f6-650712925495-kube-api-access-2dr4f\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " 
pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd2d2fe7-5dac-4f3b-80f6-650712925495-logs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-config-data\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264324 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.265182 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd2d2fe7-5dac-4f3b-80f6-650712925495-logs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.269991 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-config-data\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.270961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.273293 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.282497 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.283427 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dr4f\" (UniqueName: \"kubernetes.io/projected/fd2d2fe7-5dac-4f3b-80f6-650712925495-kube-api-access-2dr4f\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0" Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.552480 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.013539 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.017470 4758 generic.go:334] "Generic (PLEG): container finished" podID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" exitCode=143 Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.017520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerDied","Data":"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"} Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019766 4758 generic.go:334] "Generic (PLEG): container finished" podID="015d0689-8130-4f19-bb79-866766c02c63" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" exitCode=0 Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerDied","Data":"d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be"} Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerDied","Data":"7df77a789ad68a7f771f50962b45656f81dbca922fc56cf93ed7fd78a2640d9a"} Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019831 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df77a789ad68a7f771f50962b45656f81dbca922fc56cf93ed7fd78a2640d9a" Jan 30 08:54:03 crc kubenswrapper[4758]: W0130 08:54:03.027970 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd2d2fe7_5dac_4f3b_80f6_650712925495.slice/crio-8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933 WatchSource:0}: Error finding container 8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933: Status 404 returned error can't find the container with id 8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933 Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.103228 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.191386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"015d0689-8130-4f19-bb79-866766c02c63\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.191527 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"015d0689-8130-4f19-bb79-866766c02c63\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.191620 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"015d0689-8130-4f19-bb79-866766c02c63\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.198336 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w" (OuterVolumeSpecName: "kube-api-access-jq64w") pod "015d0689-8130-4f19-bb79-866766c02c63" (UID: 
"015d0689-8130-4f19-bb79-866766c02c63"). InnerVolumeSpecName "kube-api-access-jq64w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.229885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data" (OuterVolumeSpecName: "config-data") pod "015d0689-8130-4f19-bb79-866766c02c63" (UID: "015d0689-8130-4f19-bb79-866766c02c63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.236592 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "015d0689-8130-4f19-bb79-866766c02c63" (UID: "015d0689-8130-4f19-bb79-866766c02c63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.294256 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.294519 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.294602 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.780408 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" path="/var/lib/kubelet/pods/6605f787-f5cc-41f5-bae7-0e1f2006fda5/volumes" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.030595 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.031522 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd2d2fe7-5dac-4f3b-80f6-650712925495","Type":"ContainerStarted","Data":"c19d0359d077740e2a60423426de85f1e9e4215d4e0192236ad6bd8ce44d37ab"} Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.031577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd2d2fe7-5dac-4f3b-80f6-650712925495","Type":"ContainerStarted","Data":"e30080425fe17c9c7171805e7cf3a30d7971aea5633984ff5581cd7e852e852e"} Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.031589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd2d2fe7-5dac-4f3b-80f6-650712925495","Type":"ContainerStarted","Data":"8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933"} Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.075718 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.075694168 podStartE2EDuration="2.075694168s" podCreationTimestamp="2026-01-30 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:54:04.057649864 +0000 UTC m=+1449.029961415" watchObservedRunningTime="2026-01-30 08:54:04.075694168 +0000 UTC m=+1449.048005719" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.089499 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.109309 4758 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.122665 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:54:04 crc kubenswrapper[4758]: E0130 08:54:04.123162 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.123185 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.123410 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.124161 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.131425 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.135163 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.223437 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9fc9\" (UniqueName: \"kubernetes.io/projected/4697d282-598e-4faa-ae13-6ba6d3747bf0-kube-api-access-b9fc9\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.223600 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-config-data\") pod 
\"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.223652 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.324926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-config-data\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.325103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.325186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9fc9\" (UniqueName: \"kubernetes.io/projected/4697d282-598e-4faa-ae13-6ba6d3747bf0-kube-api-access-b9fc9\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.332100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-config-data\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 
08:54:04.332153 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.344582 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9fc9\" (UniqueName: \"kubernetes.io/projected/4697d282-598e-4faa-ae13-6ba6d3747bf0-kube-api-access-b9fc9\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.441866 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.913755 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:54:04 crc kubenswrapper[4758]: W0130 08:54:04.916497 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4697d282_598e_4faa_ae13_6ba6d3747bf0.slice/crio-21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d WatchSource:0}: Error finding container 21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d: Status 404 returned error can't find the container with id 21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.040643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4697d282-598e-4faa-ae13-6ba6d3747bf0","Type":"ContainerStarted","Data":"21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d"} Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.190349 4758 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:41606->10.217.0.202:8775: read: connection reset by peer" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.190408 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:41620->10.217.0.202:8775: read: connection reset by peer" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.603892 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.756861 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.756966 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.757019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.757115 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.757174 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.758153 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs" (OuterVolumeSpecName: "logs") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.765244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4" (OuterVolumeSpecName: "kube-api-access-mfbj4") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "kube-api-access-mfbj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.782819 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015d0689-8130-4f19-bb79-866766c02c63" path="/var/lib/kubelet/pods/015d0689-8130-4f19-bb79-866766c02c63/volumes" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.802148 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.814229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data" (OuterVolumeSpecName: "config-data") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.829348 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859577 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859614 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859625 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859639 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859647 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.055386 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4697d282-598e-4faa-ae13-6ba6d3747bf0","Type":"ContainerStarted","Data":"9f85b83ab2936da665dbbbbdc245073acfd0557b87921496b93ff6472a4059ec"} Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060581 4758 generic.go:334] "Generic (PLEG): container finished" podID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" exitCode=0 Jan 30 08:54:06 crc kubenswrapper[4758]: 
I0130 08:54:06.060636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerDied","Data":"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"} Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060690 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060708 4758 scope.go:117] "RemoveContainer" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060669 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerDied","Data":"3f9633af4e5e2d3e1abc96a29475ac2109ec7d4b547a737c821f935b6a92090b"} Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.075240 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.075219522 podStartE2EDuration="2.075219522s" podCreationTimestamp="2026-01-30 08:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:54:06.073331823 +0000 UTC m=+1451.045643384" watchObservedRunningTime="2026-01-30 08:54:06.075219522 +0000 UTC m=+1451.047531073" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.118252 4758 scope.go:117] "RemoveContainer" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.123822 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.168107 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 
08:54:06.183218 4758 scope.go:117] "RemoveContainer" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.187176 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01\": container with ID starting with 0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01 not found: ID does not exist" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.187221 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"} err="failed to get container status \"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01\": rpc error: code = NotFound desc = could not find container \"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01\": container with ID starting with 0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01 not found: ID does not exist" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.187252 4758 scope.go:117] "RemoveContainer" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.189877 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.190363 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190390 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.190413 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190422 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190846 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190873 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.194191 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de\": container with ID starting with c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de not found: ID does not exist" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.194237 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"} err="failed to get container status \"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de\": rpc error: code = NotFound desc = could not find container \"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de\": container with ID starting with c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de not found: ID does not exist" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.199394 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.207515 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.207770 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.217135 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.374692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.374747 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4213c10-dde9-4a4d-9af9-304dd08f755c-logs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.374938 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-config-data\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.375226 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.375258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28t5\" (UniqueName: \"kubernetes.io/projected/e4213c10-dde9-4a4d-9af9-304dd08f755c-kube-api-access-h28t5\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477130 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477183 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h28t5\" (UniqueName: \"kubernetes.io/projected/e4213c10-dde9-4a4d-9af9-304dd08f755c-kube-api-access-h28t5\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477279 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4213c10-dde9-4a4d-9af9-304dd08f755c-logs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 
08:54:06.477329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-config-data\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4213c10-dde9-4a4d-9af9-304dd08f755c-logs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.480848 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-config-data\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.481596 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.493397 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.494620 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h28t5\" (UniqueName: \"kubernetes.io/projected/e4213c10-dde9-4a4d-9af9-304dd08f755c-kube-api-access-h28t5\") pod 
\"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0" Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.542233 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:54:07 crc kubenswrapper[4758]: I0130 08:54:07.008263 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:54:07 crc kubenswrapper[4758]: I0130 08:54:07.079068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4213c10-dde9-4a4d-9af9-304dd08f755c","Type":"ContainerStarted","Data":"b48e69a863bfe56b316184e825346d6d8c42327738aa1a0e6ca8c7c6ee240f28"} Jan 30 08:54:07 crc kubenswrapper[4758]: I0130 08:54:07.781635 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" path="/var/lib/kubelet/pods/f43adcc0-5ee5-4f9e-be48-03a532f349d5/volumes" Jan 30 08:54:08 crc kubenswrapper[4758]: I0130 08:54:08.096688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4213c10-dde9-4a4d-9af9-304dd08f755c","Type":"ContainerStarted","Data":"1c7e9abae1d0bce4140864c774429f23c5f17433398beab54a53e1b8c1c9cb75"} Jan 30 08:54:08 crc kubenswrapper[4758]: I0130 08:54:08.096734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4213c10-dde9-4a4d-9af9-304dd08f755c","Type":"ContainerStarted","Data":"88526735d3d629820f39fae8305908f787e6d180fcc7b15039c90f05d5fdfed9"} Jan 30 08:54:08 crc kubenswrapper[4758]: I0130 08:54:08.120760 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.120718243 podStartE2EDuration="2.120718243s" podCreationTimestamp="2026-01-30 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 08:54:08.117498653 +0000 UTC m=+1453.089810214" watchObservedRunningTime="2026-01-30 08:54:08.120718243 +0000 UTC m=+1453.093029794" Jan 30 08:54:09 crc kubenswrapper[4758]: I0130 08:54:09.442412 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:54:11 crc kubenswrapper[4758]: I0130 08:54:11.542641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:54:11 crc kubenswrapper[4758]: I0130 08:54:11.544631 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:54:12 crc kubenswrapper[4758]: I0130 08:54:12.553618 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:54:12 crc kubenswrapper[4758]: I0130 08:54:12.553907 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:54:13 crc kubenswrapper[4758]: I0130 08:54:13.564228 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:54:13 crc kubenswrapper[4758]: I0130 08:54:13.564255 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:54:14 crc kubenswrapper[4758]: I0130 08:54:14.444985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 08:54:14 crc kubenswrapper[4758]: I0130 08:54:14.476034 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 08:54:15 crc kubenswrapper[4758]: I0130 08:54:15.182554 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 08:54:16 crc kubenswrapper[4758]: I0130 08:54:16.544770 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:54:16 crc kubenswrapper[4758]: I0130 08:54:16.545513 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:54:17 crc kubenswrapper[4758]: I0130 08:54:17.558231 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4213c10-dde9-4a4d-9af9-304dd08f755c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:54:17 crc kubenswrapper[4758]: I0130 08:54:17.558242 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4213c10-dde9-4a4d-9af9-304dd08f755c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:54:20 crc kubenswrapper[4758]: I0130 08:54:20.573632 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.389496 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.390161 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.566152 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.569604 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.570006 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.577079 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:54:23 crc kubenswrapper[4758]: I0130 08:54:23.236639 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:54:23 crc kubenswrapper[4758]: I0130 08:54:23.246343 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:54:23 crc kubenswrapper[4758]: E0130 08:54:23.980864 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:54:24 crc kubenswrapper[4758]: I0130 08:54:24.244406 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.119952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:54:26 crc kubenswrapper[4758]: E0130 08:54:26.120175 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:54:26 crc kubenswrapper[4758]: E0130 08:54:26.120210 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:54:26 crc kubenswrapper[4758]: E0130 08:54:26.120276 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:56:28.120254503 +0000 UTC m=+1593.092566054 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.550204 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.550863 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.555221 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:54:27 crc kubenswrapper[4758]: I0130 08:54:27.272605 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.021246 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ws6z7"] Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.023147 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.025924 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.031699 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.037580 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ws6z7"] Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108753 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108807 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6fb6\" (UniqueName: 
\"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108833 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108856 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108920 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210142 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6fb6\" (UniqueName: 
\"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210206 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210291 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210342 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod 
\"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.211567 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.211904 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.212185 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.215670 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.216654 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 
crc kubenswrapper[4758]: I0130 08:54:44.225762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.227166 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.352990 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kt9hk" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.361194 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.829755 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ws6z7"] Jan 30 08:54:45 crc kubenswrapper[4758]: I0130 08:54:45.416527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerStarted","Data":"9d3059fcb6e54b15e5a83e28a91ccb5fd27d766b3cd8091bb00825f4c3db1d9a"} Jan 30 08:54:48 crc kubenswrapper[4758]: I0130 08:54:48.448408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerStarted","Data":"4977aaefd3e81d95488932e45781b698031c45e3d8e91e0cfdde1b5ff7bc8926"} Jan 30 08:54:48 crc kubenswrapper[4758]: I0130 08:54:48.472367 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ws6z7" podStartSLOduration=2.2697492759999998 podStartE2EDuration="5.472346712s" podCreationTimestamp="2026-01-30 08:54:43 +0000 UTC" firstStartedPulling="2026-01-30 08:54:44.833243843 +0000 UTC m=+1489.805555414" lastFinishedPulling="2026-01-30 08:54:48.035841299 +0000 UTC m=+1493.008152850" observedRunningTime="2026-01-30 08:54:48.471173605 +0000 UTC m=+1493.443485166" watchObservedRunningTime="2026-01-30 08:54:48.472346712 +0000 UTC m=+1493.444658263" Jan 30 08:54:52 crc kubenswrapper[4758]: I0130 08:54:52.387580 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:54:52 crc kubenswrapper[4758]: I0130 08:54:52.387975 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:54:55 crc kubenswrapper[4758]: I0130 08:54:55.508223 4758 generic.go:334] "Generic (PLEG): container finished" podID="0a36303a-ea35-4a12-be39-906481ea247a" containerID="4977aaefd3e81d95488932e45781b698031c45e3d8e91e0cfdde1b5ff7bc8926" exitCode=0 Jan 30 08:54:55 crc kubenswrapper[4758]: I0130 08:54:55.508330 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerDied","Data":"4977aaefd3e81d95488932e45781b698031c45e3d8e91e0cfdde1b5ff7bc8926"} Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.855657 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893350 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893406 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod 
\"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893512 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893582 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893652 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.895064 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.895983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.910469 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6" (OuterVolumeSpecName: "kube-api-access-m6fb6") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "kube-api-access-m6fb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.920993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts" (OuterVolumeSpecName: "scripts") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.929511 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.931972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.933619 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996311 4758 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996691 4758 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996792 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996854 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc 
kubenswrapper[4758]: I0130 08:54:56.996916 4758 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996987 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.997074 4758 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:57 crc kubenswrapper[4758]: I0130 08:54:57.531290 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerDied","Data":"9d3059fcb6e54b15e5a83e28a91ccb5fd27d766b3cd8091bb00825f4c3db1d9a"} Jan 30 08:54:57 crc kubenswrapper[4758]: I0130 08:54:57.532239 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3059fcb6e54b15e5a83e28a91ccb5fd27d766b3cd8091bb00825f4c3db1d9a" Jan 30 08:54:57 crc kubenswrapper[4758]: I0130 08:54:57.531355 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:55:03 crc kubenswrapper[4758]: E0130 08:55:03.991519 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-75f5775999-fhl5h" podUID="c2358e5c-db98-4b7b-8b6c-2e83132655a9" Jan 30 08:55:04 crc kubenswrapper[4758]: I0130 08:55:04.603478 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:05 crc kubenswrapper[4758]: I0130 08:55:05.507110 4758 scope.go:117] "RemoveContainer" containerID="44986833bc4a6d26e1f0961e4b7a2ef317d50d30ae747ee1fe73659d1eb85ff3" Jan 30 08:55:05 crc kubenswrapper[4758]: I0130 08:55:05.541092 4758 scope.go:117] "RemoveContainer" containerID="183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60" Jan 30 08:55:05 crc kubenswrapper[4758]: I0130 08:55:05.574073 4758 scope.go:117] "RemoveContainer" containerID="86c33d040e9741a4c414c369434d352975d29a3e661c9ee51459ded8965dad1b" Jan 30 08:55:07 crc kubenswrapper[4758]: I0130 08:55:07.905216 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:07 crc kubenswrapper[4758]: I0130 08:55:07.912329 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:08 crc kubenswrapper[4758]: I0130 08:55:08.205475 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:08 crc kubenswrapper[4758]: I0130 08:55:08.806940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75f5775999-fhl5h"] Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.671988 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75f5775999-fhl5h" event={"ID":"c2358e5c-db98-4b7b-8b6c-2e83132655a9","Type":"ContainerStarted","Data":"fe112a6cc4c50b51cd660638c4c0b8df89393f9072ed1e4696024d020708d35b"} Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673278 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673392 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75f5775999-fhl5h" event={"ID":"c2358e5c-db98-4b7b-8b6c-2e83132655a9","Type":"ContainerStarted","Data":"44dbc54e8ebc6a1c282b44819599e749c1733e58e7d017dd2ffdf2b5e9bc3c68"} Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673467 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75f5775999-fhl5h" event={"ID":"c2358e5c-db98-4b7b-8b6c-2e83132655a9","Type":"ContainerStarted","Data":"94c3d691a550fe60449faf4eb5a03c55c0201e65c22fa6cce45f06b892ee2d42"} Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.699571 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-75f5775999-fhl5h" podStartSLOduration=252.699555245 podStartE2EDuration="4m12.699555245s" podCreationTimestamp="2026-01-30 08:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:55:09.698986697 +0000 
UTC m=+1514.671298248" watchObservedRunningTime="2026-01-30 08:55:09.699555245 +0000 UTC m=+1514.671866796" Jan 30 08:55:18 crc kubenswrapper[4758]: I0130 08:55:18.211150 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:18 crc kubenswrapper[4758]: I0130 08:55:18.211746 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.387746 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.388301 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.388358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.389114 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.389162 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" gracePeriod=600 Jan 30 08:55:22 crc kubenswrapper[4758]: E0130 08:55:22.532294 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.776056 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" exitCode=0 Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.776066 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"} Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.776499 4758 scope.go:117] "RemoveContainer" containerID="b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.777053 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:55:22 crc kubenswrapper[4758]: E0130 08:55:22.777367 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:37 crc kubenswrapper[4758]: I0130 08:55:37.768980 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:55:37 crc kubenswrapper[4758]: E0130 08:55:37.770031 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.373437 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:55:43 crc kubenswrapper[4758]: E0130 08:55:43.375254 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a36303a-ea35-4a12-be39-906481ea247a" containerName="swift-ring-rebalance" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.375351 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a36303a-ea35-4a12-be39-906481ea247a" containerName="swift-ring-rebalance" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.375585 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a36303a-ea35-4a12-be39-906481ea247a" containerName="swift-ring-rebalance" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.377959 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.399246 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.440183 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.440249 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.440735 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.542951 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.543478 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.543505 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.544198 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.544235 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.572269 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.704614 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.347776 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.978817 4758 generic.go:334] "Generic (PLEG): container finished" podID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" exitCode=0 Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.978933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4"} Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.979357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerStarted","Data":"e5d2968e6be1c8730a9024950dbf5f3e8b53eba5873867bdf9a0d6e741cb3717"} Jan 30 08:55:46 crc kubenswrapper[4758]: I0130 08:55:46.999197 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerStarted","Data":"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6"} Jan 30 08:55:49 crc kubenswrapper[4758]: I0130 08:55:49.019315 4758 generic.go:334] "Generic (PLEG): container finished" podID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" exitCode=0 Jan 30 08:55:49 crc kubenswrapper[4758]: I0130 08:55:49.019376 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" 
event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6"} Jan 30 08:55:50 crc kubenswrapper[4758]: I0130 08:55:50.034400 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerStarted","Data":"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13"} Jan 30 08:55:50 crc kubenswrapper[4758]: I0130 08:55:50.068790 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5644" podStartSLOduration=2.267292336 podStartE2EDuration="7.0687504s" podCreationTimestamp="2026-01-30 08:55:43 +0000 UTC" firstStartedPulling="2026-01-30 08:55:44.982789834 +0000 UTC m=+1549.955101385" lastFinishedPulling="2026-01-30 08:55:49.784247898 +0000 UTC m=+1554.756559449" observedRunningTime="2026-01-30 08:55:50.057081274 +0000 UTC m=+1555.029392835" watchObservedRunningTime="2026-01-30 08:55:50.0687504 +0000 UTC m=+1555.041061961" Jan 30 08:55:52 crc kubenswrapper[4758]: I0130 08:55:52.768576 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:55:52 crc kubenswrapper[4758]: E0130 08:55:52.769609 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:53 crc kubenswrapper[4758]: I0130 08:55:53.705394 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:53 crc 
kubenswrapper[4758]: I0130 08:55:53.705823 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:54 crc kubenswrapper[4758]: I0130 08:55:54.756793 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d5644" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" probeResult="failure" output=< Jan 30 08:55:54 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:55:54 crc kubenswrapper[4758]: > Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.751159 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.769930 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:03 crc kubenswrapper[4758]: E0130 08:56:03.770583 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.808655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.993909 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.180364 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-d5644" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" containerID="cri-o://cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" gracePeriod=2 Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.679980 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.711241 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.711391 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.711418 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.712317 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities" (OuterVolumeSpecName: "utilities") pod "9d4df89a-d459-4dc8-a19c-3839e9c47a8a" (UID: "9d4df89a-d459-4dc8-a19c-3839e9c47a8a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.722976 4758 scope.go:117] "RemoveContainer" containerID="2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.723454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq" (OuterVolumeSpecName: "kube-api-access-vrmvq") pod "9d4df89a-d459-4dc8-a19c-3839e9c47a8a" (UID: "9d4df89a-d459-4dc8-a19c-3839e9c47a8a"). InnerVolumeSpecName "kube-api-access-vrmvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.770422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d4df89a-d459-4dc8-a19c-3839e9c47a8a" (UID: "9d4df89a-d459-4dc8-a19c-3839e9c47a8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.799398 4758 scope.go:117] "RemoveContainer" containerID="133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.816167 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.816421 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.816524 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.837490 4758 scope.go:117] "RemoveContainer" containerID="c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.190695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13"} Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.191029 4758 scope.go:117] "RemoveContainer" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.190584 4758 generic.go:334] "Generic (PLEG): container finished" podID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" exitCode=0 Jan 30 08:56:06 crc 
kubenswrapper[4758]: I0130 08:56:06.191225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"e5d2968e6be1c8730a9024950dbf5f3e8b53eba5873867bdf9a0d6e741cb3717"} Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.190716 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.213700 4758 scope.go:117] "RemoveContainer" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.221194 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.232214 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.243529 4758 scope.go:117] "RemoveContainer" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.261904 4758 scope.go:117] "RemoveContainer" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" Jan 30 08:56:06 crc kubenswrapper[4758]: E0130 08:56:06.262683 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13\": container with ID starting with cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13 not found: ID does not exist" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.262793 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13"} err="failed to get container status \"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13\": rpc error: code = NotFound desc = could not find container \"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13\": container with ID starting with cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13 not found: ID does not exist" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.262880 4758 scope.go:117] "RemoveContainer" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" Jan 30 08:56:06 crc kubenswrapper[4758]: E0130 08:56:06.263315 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6\": container with ID starting with faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6 not found: ID does not exist" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.263404 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6"} err="failed to get container status \"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6\": rpc error: code = NotFound desc = could not find container \"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6\": container with ID starting with faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6 not found: ID does not exist" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.263478 4758 scope.go:117] "RemoveContainer" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" Jan 30 08:56:06 crc kubenswrapper[4758]: E0130 08:56:06.263815 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4\": container with ID starting with 932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4 not found: ID does not exist" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.263857 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4"} err="failed to get container status \"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4\": rpc error: code = NotFound desc = could not find container \"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4\": container with ID starting with 932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4 not found: ID does not exist" Jan 30 08:56:07 crc kubenswrapper[4758]: I0130 08:56:07.778782 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" path="/var/lib/kubelet/pods/9d4df89a-d459-4dc8-a19c-3839e9c47a8a/volumes" Jan 30 08:56:18 crc kubenswrapper[4758]: I0130 08:56:18.769350 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:18 crc kubenswrapper[4758]: E0130 08:56:18.770687 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.867641 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:22 crc kubenswrapper[4758]: E0130 08:56:22.868668 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.868686 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" Jan 30 08:56:22 crc kubenswrapper[4758]: E0130 08:56:22.868698 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-utilities" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.868706 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-utilities" Jan 30 08:56:22 crc kubenswrapper[4758]: E0130 08:56:22.868741 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-content" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.868750 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-content" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.869013 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.870715 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.889792 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.958732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.958820 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.958901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.060909 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061078 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061165 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061725 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061837 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.088994 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.197355 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.818158 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:24 crc kubenswrapper[4758]: I0130 08:56:24.390075 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f689102-018e-4401-9a71-d2066434a1b1" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" exitCode=0 Jan 30 08:56:24 crc kubenswrapper[4758]: I0130 08:56:24.390105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed"} Jan 30 08:56:24 crc kubenswrapper[4758]: I0130 08:56:24.390141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerStarted","Data":"d3680b93a3f93bc0676cf4233306ad55bc57d711b4f34aed9556d725dac61a80"} Jan 30 08:56:26 crc kubenswrapper[4758]: I0130 08:56:26.407252 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerStarted","Data":"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723"} Jan 30 08:56:27 crc kubenswrapper[4758]: E0130 08:56:27.246017 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:56:27 crc kubenswrapper[4758]: I0130 08:56:27.418292 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f689102-018e-4401-9a71-d2066434a1b1" 
containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" exitCode=0 Jan 30 08:56:27 crc kubenswrapper[4758]: I0130 08:56:27.418997 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:56:27 crc kubenswrapper[4758]: I0130 08:56:27.420636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723"} Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.177522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.195597 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.321242 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.455133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerStarted","Data":"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4"} Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.514475 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4k5bg" podStartSLOduration=3.014682733 podStartE2EDuration="6.514450781s" podCreationTimestamp="2026-01-30 08:56:22 +0000 UTC" firstStartedPulling="2026-01-30 08:56:24.391906835 +0000 UTC m=+1589.364218386" lastFinishedPulling="2026-01-30 08:56:27.891674883 +0000 UTC m=+1592.863986434" observedRunningTime="2026-01-30 08:56:28.501781434 +0000 UTC m=+1593.474092985" watchObservedRunningTime="2026-01-30 08:56:28.514450781 +0000 UTC m=+1593.486762342" Jan 30 08:56:29 crc kubenswrapper[4758]: I0130 08:56:29.773984 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:29 crc kubenswrapper[4758]: E0130 08:56:29.774742 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:29 crc kubenswrapper[4758]: I0130 08:56:29.879498 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:56:30 crc kubenswrapper[4758]: I0130 08:56:30.478085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"8703f1e9e8928434c98d4c6305d5a4ec1a5b75cf736b3eed9891cf1dbbf64aee"} Jan 30 08:56:31 crc kubenswrapper[4758]: I0130 08:56:31.490874 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"5ee4528374b12e4014e8b0ebffc8c445fd01cea5224d400e8f5fd76b2d763154"} Jan 30 08:56:32 crc kubenswrapper[4758]: I0130 08:56:32.510793 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"8e4139287ca583faa197b823336a3caf472fda910fb49db2b67f76707b1ab68c"} Jan 30 08:56:32 crc kubenswrapper[4758]: I0130 08:56:32.511235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"db0950963ea6ed2a744ce1e483a1cb1cf3e0a85d7e27f7d9f80ee8c1422595bd"} Jan 30 08:56:32 crc kubenswrapper[4758]: I0130 08:56:32.511246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"bf91cdfae8594acd2118eb0120f7c7640e92562bfbec39985d4919b429f6f40f"} Jan 30 08:56:33 crc kubenswrapper[4758]: I0130 08:56:33.198391 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:33 crc kubenswrapper[4758]: I0130 08:56:33.198874 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:33 crc kubenswrapper[4758]: I0130 08:56:33.577242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"30f6ee3f2270007395ace89a2d9dd21e6d303244a85417e984e7d3f7e7b39f47"} Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.326364 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4k5bg" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" probeResult="failure" output=< Jan 30 08:56:34 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:56:34 crc kubenswrapper[4758]: > Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.592029 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"30ce0ad959367cccac320ed766b5baed08467ed4760da1418aed64f0411fc753"} Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.592156 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"fb55e9300448c5a44afa0c1fd1f333132d6c9589c02c72240d2cdd6fb37b7254"} Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.592169 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"fae9c323ae3b7f5a016d334b00edb0a9deb2b042da2a81d063615e2004662bf1"} Jan 30 08:56:36 crc kubenswrapper[4758]: I0130 08:56:36.619608 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"5d40926bc99838128d993cee222a9d9bc5a4e793f7fa6302bf1d9d4c67af178f"} Jan 30 08:56:36 crc kubenswrapper[4758]: I0130 08:56:36.620307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"4888d37ae86ea83d2b54db692d59e01b14e365ebabf18a9dda7b1778184c04e5"} Jan 30 08:56:36 crc kubenswrapper[4758]: I0130 08:56:36.620323 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"4efeb2a6e7665ad7862a983eb539aa1aa71f10510c7e5ab5517537601fed784d"} Jan 30 08:56:37 crc kubenswrapper[4758]: I0130 08:56:37.681566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"e6673ae2446935eefe6357be07bc24bba88a18cef45ece23731861c73ca91739"} Jan 30 08:56:37 crc kubenswrapper[4758]: I0130 08:56:37.681928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"6111c543313b1bac1b03d6d1e86886bb801aeb349fc5e27dff1a34fa381b3628"} Jan 30 08:56:37 crc kubenswrapper[4758]: I0130 08:56:37.681948 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"390b1861bb92e08bbc3f8bb823cd442b17f5ad4a0df2559d89540b8c341bbdeb"} Jan 30 08:56:38 crc kubenswrapper[4758]: I0130 08:56:38.699126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"f512255a0c0c8def3a189698c8082c92e06dfea7350e8de8021ff589db4a383d"} Jan 30 08:56:38 crc kubenswrapper[4758]: I0130 08:56:38.755064 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=500.723413615 podStartE2EDuration="8m26.755027332s" podCreationTimestamp="2026-01-30 08:48:12 +0000 UTC" firstStartedPulling="2026-01-30 
08:56:29.894867312 +0000 UTC m=+1594.867178873" lastFinishedPulling="2026-01-30 08:56:35.926481039 +0000 UTC m=+1600.898792590" observedRunningTime="2026-01-30 08:56:38.746573817 +0000 UTC m=+1603.718885398" watchObservedRunningTime="2026-01-30 08:56:38.755027332 +0000 UTC m=+1603.727338883" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.109712 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.112447 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.116241 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.125957 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245141 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245230 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245490 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.246223 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348270 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348459 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349786 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349800 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod 
\"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349806 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349986 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.350146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.350514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.377349 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: 
\"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.434249 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.080565 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.723706 4758 generic.go:334] "Generic (PLEG): container finished" podID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb" exitCode=0 Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.723806 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerDied","Data":"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"} Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.724071 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerStarted","Data":"a0aeac30b993bcb0e0326076acc7ca6b11ca6918894e3364821c28d868e8c055"} Jan 30 08:56:41 crc kubenswrapper[4758]: I0130 08:56:41.736208 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerStarted","Data":"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"} Jan 30 08:56:41 crc kubenswrapper[4758]: I0130 08:56:41.736588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:41 crc kubenswrapper[4758]: I0130 08:56:41.762602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" 
podStartSLOduration=2.762582173 podStartE2EDuration="2.762582173s" podCreationTimestamp="2026-01-30 08:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:56:41.75548216 +0000 UTC m=+1606.727793731" watchObservedRunningTime="2026-01-30 08:56:41.762582173 +0000 UTC m=+1606.734893724" Jan 30 08:56:43 crc kubenswrapper[4758]: I0130 08:56:43.245178 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:43 crc kubenswrapper[4758]: I0130 08:56:43.297949 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:43 crc kubenswrapper[4758]: I0130 08:56:43.484193 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:44 crc kubenswrapper[4758]: I0130 08:56:44.762269 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4k5bg" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" containerID="cri-o://f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" gracePeriod=2 Jan 30 08:56:44 crc kubenswrapper[4758]: I0130 08:56:44.769074 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:44 crc kubenswrapper[4758]: E0130 08:56:44.769505 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 
08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.233988 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288104 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"7f689102-018e-4401-9a71-d2066434a1b1\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288296 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"7f689102-018e-4401-9a71-d2066434a1b1\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288388 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"7f689102-018e-4401-9a71-d2066434a1b1\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288802 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities" (OuterVolumeSpecName: "utilities") pod "7f689102-018e-4401-9a71-d2066434a1b1" (UID: "7f689102-018e-4401-9a71-d2066434a1b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.295358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh" (OuterVolumeSpecName: "kube-api-access-kwnsh") pod "7f689102-018e-4401-9a71-d2066434a1b1" (UID: "7f689102-018e-4401-9a71-d2066434a1b1"). InnerVolumeSpecName "kube-api-access-kwnsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.391170 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.391218 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.399670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f689102-018e-4401-9a71-d2066434a1b1" (UID: "7f689102-018e-4401-9a71-d2066434a1b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.496426 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.775679 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f689102-018e-4401-9a71-d2066434a1b1" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" exitCode=0 Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.775775 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.798371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4"} Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.798431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"d3680b93a3f93bc0676cf4233306ad55bc57d711b4f34aed9556d725dac61a80"} Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.798456 4758 scope.go:117] "RemoveContainer" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.843123 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.850704 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 
08:56:45.863919 4758 scope.go:117] "RemoveContainer" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.916475 4758 scope.go:117] "RemoveContainer" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.963175 4758 scope.go:117] "RemoveContainer" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" Jan 30 08:56:45 crc kubenswrapper[4758]: E0130 08:56:45.963762 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4\": container with ID starting with f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4 not found: ID does not exist" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.963801 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4"} err="failed to get container status \"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4\": rpc error: code = NotFound desc = could not find container \"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4\": container with ID starting with f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4 not found: ID does not exist" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.963829 4758 scope.go:117] "RemoveContainer" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" Jan 30 08:56:45 crc kubenswrapper[4758]: E0130 08:56:45.964281 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723\": container 
with ID starting with 12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723 not found: ID does not exist" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.964300 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723"} err="failed to get container status \"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723\": rpc error: code = NotFound desc = could not find container \"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723\": container with ID starting with 12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723 not found: ID does not exist" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.964312 4758 scope.go:117] "RemoveContainer" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" Jan 30 08:56:45 crc kubenswrapper[4758]: E0130 08:56:45.964593 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed\": container with ID starting with d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed not found: ID does not exist" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.964617 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed"} err="failed to get container status \"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed\": rpc error: code = NotFound desc = could not find container \"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed\": container with ID starting with d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed not 
found: ID does not exist" Jan 30 08:56:47 crc kubenswrapper[4758]: I0130 08:56:47.778655 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f689102-018e-4401-9a71-d2066434a1b1" path="/var/lib/kubelet/pods/7f689102-018e-4401-9a71-d2066434a1b1/volumes" Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.435250 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.531523 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.531940 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns" containerID="cri-o://6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34" gracePeriod=10 Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.833676 4758 generic.go:334] "Generic (PLEG): container finished" podID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerID="6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34" exitCode=0 Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.833730 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerDied","Data":"6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34"} Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.105972 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.225880 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.225935 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.225993 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.226097 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.226173 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.233229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6" (OuterVolumeSpecName: "kube-api-access-jlpl6") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "kube-api-access-jlpl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.326448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.329363 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.329412 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.337544 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config" (OuterVolumeSpecName: "config") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.347471 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.366874 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.432310 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.432355 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.432369 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.845668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" 
event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerDied","Data":"ee6038add08fd771bda22b40e423fff940fc9b5d99566b28b9e2c028c0c8fa85"} Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.846111 4758 scope.go:117] "RemoveContainer" containerID="6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.845732 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.868474 4758 scope.go:117] "RemoveContainer" containerID="c6c147de1a4a6872b323ab195d60eaec8350ddff6728805d455fbf3cd9db1508" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.890449 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.907854 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:56:51 crc kubenswrapper[4758]: I0130 08:56:51.778780 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" path="/var/lib/kubelet/pods/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe/volumes" Jan 30 08:56:55 crc kubenswrapper[4758]: I0130 08:56:55.776103 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:55 crc kubenswrapper[4758]: E0130 08:56:55.778856 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:57 crc kubenswrapper[4758]: I0130 
08:56:57.786806 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:56:59 crc kubenswrapper[4758]: I0130 08:56:59.741212 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:03 crc kubenswrapper[4758]: I0130 08:57:03.448863 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" containerID="cri-o://dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" gracePeriod=604795 Jan 30 08:57:04 crc kubenswrapper[4758]: I0130 08:57:04.463587 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" containerID="cri-o://cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9" gracePeriod=604796 Jan 30 08:57:05 crc kubenswrapper[4758]: I0130 08:57:05.903835 4758 scope.go:117] "RemoveContainer" containerID="1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7" Jan 30 08:57:05 crc kubenswrapper[4758]: I0130 08:57:05.935929 4758 scope.go:117] "RemoveContainer" containerID="056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b" Jan 30 08:57:08 crc kubenswrapper[4758]: I0130 08:57:08.103158 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 08:57:08 crc kubenswrapper[4758]: I0130 08:57:08.256295 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 30 08:57:10 crc kubenswrapper[4758]: 
I0130 08:57:10.000206 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.020984 4758 generic.go:334] "Generic (PLEG): container finished" podID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" exitCode=0 Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerDied","Data":"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba"} Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021092 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerDied","Data":"8f4809dc1b13b9e08c7692a160b88862bacf3c8f0bf775e5a671d7ecfeb7b0f7"} Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021120 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021133 4758 scope.go:117] "RemoveContainer" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.043966 4758 scope.go:117] "RemoveContainer" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.083146 4758 scope.go:117] "RemoveContainer" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.094478 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba\": container with ID starting with dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba not found: ID does not exist" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.094529 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba"} err="failed to get container status \"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba\": rpc error: code = NotFound desc = could not find container \"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba\": container with ID starting with dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba not found: ID does not exist" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.094555 4758 scope.go:117] "RemoveContainer" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.096104 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293\": container with ID starting with 80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293 not found: ID does not exist" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.096182 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293"} err="failed to get container status \"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293\": rpc error: code = NotFound desc = could not find container \"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293\": container with ID starting with 80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293 not found: ID does not exist" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.109896 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.109960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc 
kubenswrapper[4758]: I0130 08:57:10.110042 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110151 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110198 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110276 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110300 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110374 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110390 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.112565 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.114604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.141865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.143197 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.150165 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297" (OuterVolumeSpecName: "kube-api-access-7p297") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "kube-api-access-7p297". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.154766 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.187601 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.189301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info" (OuterVolumeSpecName: "pod-info") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214505 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214533 4758 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214543 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214554 4758 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214563 4758 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214585 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214594 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214603 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.251257 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf" (OuterVolumeSpecName: "server-conf") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.255146 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data" (OuterVolumeSpecName: "config-data") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.285260 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.292993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318318 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318356 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318368 4758 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318377 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.414190 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.437218 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456275 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456787 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="init"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456815 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="init"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456829 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-utilities"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456838 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-utilities"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456861 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456867 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456887 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="setup-container"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456895 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="setup-container"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456903 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-content"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456909 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-content"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456925 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456931 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456943 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456948 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.457153 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.457173 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.457185 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.458205 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.467718 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.467982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8zbw4"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468225 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468270 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468496 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468615 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468632 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.493142 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c72311ae-5d7e-4978-a690-a9bee0b3672b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632925 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-config-data\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpptg\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-kube-api-access-cpptg\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633400 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c72311ae-5d7e-4978-a690-a9bee0b3672b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-config-data\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735619 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735689 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpptg\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-kube-api-access-cpptg\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735743 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735769 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c72311ae-5d7e-4978-a690-a9bee0b3672b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735897 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735957 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c72311ae-5d7e-4978-a690-a9bee0b3672b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735986 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.736018 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.736514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-config-data\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737379 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737754 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737760 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.741758 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c72311ae-5d7e-4978-a690-a9bee0b3672b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.759103 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.759344 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.763261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c72311ae-5d7e-4978-a690-a9bee0b3672b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.766632 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpptg\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-kube-api-access-cpptg\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.770027 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"
Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.770395 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.830464 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.010882 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"]
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.012902 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.017256 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.018634 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"]
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.042752 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerID="cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9" exitCode=0
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.042789 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerDied","Data":"cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9"}
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.069039 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.069093 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071566 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071925 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071992 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.093312 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.179230 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180257 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180623 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180661 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180700 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180740 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180193 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.181787 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.182353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.182875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.189070 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.194195 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.220006 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.228497 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387010 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387068 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387088 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387127 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387329 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387362 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387388 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387423 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387446 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387532 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.390349 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.390637 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.396010 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.402082 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.402166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.402621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.409172 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.415232 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz" (OuterVolumeSpecName: "kube-api-access-xslvz") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "kube-api-access-xslvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.417033 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info" (OuterVolumeSpecName: "pod-info") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.490392 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491042 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491158 4758 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491210 4758 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491270 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491335 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491385 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491435
4758 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.541835 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data" (OuterVolumeSpecName: "config-data") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.557698 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf" (OuterVolumeSpecName: "server-conf") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.593337 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.593389 4758 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.612345 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.696811 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.700311 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.792859 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" path="/var/lib/kubelet/pods/89ff2fc5-609f-4ca7-b997-9f8adfa5a221/volumes" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.800948 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.845866 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.052966 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerStarted","Data":"c3e7defacd392862d3dfefe46607e7c14ff9fc9486f9a510a8c91c79b27f281e"} Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.055334 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerDied","Data":"2885eef36f4c15f0a02d92953e9ac8c27214898712fb29c24b72fc2ee76d019d"} Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.055599 4758 scope.go:117] "RemoveContainer" containerID="cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.055592 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.069172 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:12 crc kubenswrapper[4758]: W0130 08:57:12.073532 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0bdec68_2725_4515_b6f0_dc421cec4a6d.slice/crio-875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8 WatchSource:0}: Error finding container 875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8: Status 404 returned error can't find the container with id 875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8 Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.081195 4758 scope.go:117] "RemoveContainer" containerID="858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.093470 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.103393 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.127729 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: E0130 08:57:12.128263 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="setup-container" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.128287 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="setup-container" Jan 30 08:57:12 crc kubenswrapper[4758]: E0130 08:57:12.128317 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" 
containerName="rabbitmq" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.128326 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.128560 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.133413 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.137405 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.137856 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138615 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nwnvg" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138650 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138801 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.147003 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc 
kubenswrapper[4758]: I0130 08:57:12.210288 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0e1aed-0ac5-4482-906f-89c9243729ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210389 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210419 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210438 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc 
kubenswrapper[4758]: I0130 08:57:12.210472 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcqj\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-kube-api-access-xhcqj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210494 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0e1aed-0ac5-4482-906f-89c9243729ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210552 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: 
I0130 08:57:12.210597 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312242 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0e1aed-0ac5-4482-906f-89c9243729ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312315 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312407 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312425 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcqj\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-kube-api-access-xhcqj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312493 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0e1aed-0ac5-4482-906f-89c9243729ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312588 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313465 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313652 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313856 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313999 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.314167 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.396980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0e1aed-0ac5-4482-906f-89c9243729ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.397634 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.398341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc 
kubenswrapper[4758]: I0130 08:57:12.398908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0e1aed-0ac5-4482-906f-89c9243729ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.399066 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcqj\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-kube-api-access-xhcqj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.437071 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.472241 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.792587 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.064769 4758 generic.go:334] "Generic (PLEG): container finished" podID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerID="70834c4c0f2e5700560407708799402eb1ed5ab917b05c871599639beb6a62b7" exitCode=0 Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.064844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerDied","Data":"70834c4c0f2e5700560407708799402eb1ed5ab917b05c871599639beb6a62b7"} Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.064873 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerStarted","Data":"875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8"} Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.069184 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerStarted","Data":"38f4b6057dda17857234eb47990f19c49f4f2cf41a81db9065c70f1cc114ff6a"} Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.779662 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" path="/var/lib/kubelet/pods/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5/volumes" Jan 30 08:57:14 crc kubenswrapper[4758]: I0130 08:57:14.079308 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerStarted","Data":"2594c87d64066e1c3d6a70cb5a8bd66af20a2abcdf92c45da3629b70a6baade1"} Jan 30 08:57:14 crc 
kubenswrapper[4758]: I0130 08:57:14.082355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerStarted","Data":"f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a"} Jan 30 08:57:14 crc kubenswrapper[4758]: I0130 08:57:14.083029 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:15 crc kubenswrapper[4758]: I0130 08:57:15.092601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerStarted","Data":"d1c642bff2555442bfe6d2153832a4b302a1b8e531a2b0373b69b695dfe508bc"} Jan 30 08:57:15 crc kubenswrapper[4758]: I0130 08:57:15.121335 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" podStartSLOduration=5.121313233 podStartE2EDuration="5.121313233s" podCreationTimestamp="2026-01-30 08:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:14.132204256 +0000 UTC m=+1639.104515807" watchObservedRunningTime="2026-01-30 08:57:15.121313233 +0000 UTC m=+1640.093624794" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.277952 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.283238 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.290097 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.328806 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.328895 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.328954 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.430294 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.430353 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.430392 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.431100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.431130 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.462891 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.620266 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:18 crc kubenswrapper[4758]: I0130 08:57:18.184461 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"]
Jan 30 08:57:18 crc kubenswrapper[4758]: W0130 08:57:18.189239 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3009f89_995c_4ed2_81e9_3b8e76c970d0.slice/crio-4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15 WatchSource:0}: Error finding container 4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15: Status 404 returned error can't find the container with id 4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15
Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.129566 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298" exitCode=0
Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.129648 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"}
Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.129862 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerStarted","Data":"4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15"}
Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.133121 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.150879 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f" exitCode=0
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.151060 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"}
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.405244 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5974d6465c-97ckq"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.493688 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"]
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.493979 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" containerID="cri-o://38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" gracePeriod=10
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.721009 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549cc57c95-cpkk5"]
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.722617 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.750191 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cc57c95-cpkk5"]
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.829908 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-nb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830228 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-svc\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvxk\" (UniqueName: \"kubernetes.io/projected/cdf54085-1dd0-4eb1-9640-e75c69be5a44-kube-api-access-9kvxk\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-swift-storage-0\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830609 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-openstack-edpm-ipam\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830668 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-sb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830696 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-config\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-nb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932484 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-svc\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932572 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvxk\" (UniqueName: \"kubernetes.io/projected/cdf54085-1dd0-4eb1-9640-e75c69be5a44-kube-api-access-9kvxk\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932601 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-swift-storage-0\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-openstack-edpm-ipam\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-sb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-config\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.933591 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-config\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.934269 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-nb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.936339 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-swift-storage-0\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.936771 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-openstack-edpm-ipam\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.936819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-svc\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.938531 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-sb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.961455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvxk\" (UniqueName: \"kubernetes.io/projected/cdf54085-1dd0-4eb1-9640-e75c69be5a44-kube-api-access-9kvxk\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.042186 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.147287 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200306 4758 generic.go:334] "Generic (PLEG): container finished" podID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" exitCode=0
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerDied","Data":"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"}
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200391 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerDied","Data":"a0aeac30b993bcb0e0326076acc7ca6b11ca6918894e3364821c28d868e8c055"}
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200410 4758 scope.go:117] "RemoveContainer" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200574 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.275370 4758 scope.go:117] "RemoveContainer" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.327307 4758 scope.go:117] "RemoveContainer" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"
Jan 30 08:57:22 crc kubenswrapper[4758]: E0130 08:57:22.327993 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0\": container with ID starting with 38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0 not found: ID does not exist" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.328025 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"} err="failed to get container status \"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0\": rpc error: code = NotFound desc = could not find container \"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0\": container with ID starting with 38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0 not found: ID does not exist"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.328071 4758 scope.go:117] "RemoveContainer" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"
Jan 30 08:57:22 crc kubenswrapper[4758]: E0130 08:57:22.331119 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb\": container with ID starting with a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb not found: ID does not exist" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.331177 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"} err="failed to get container status \"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb\": rpc error: code = NotFound desc = could not find container \"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb\": container with ID starting with a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb not found: ID does not exist"
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.342941 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343009 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343142 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343191 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343284 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343309 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.353227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj" (OuterVolumeSpecName: "kube-api-access-r7xsj") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "kube-api-access-r7xsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.407189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.418415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config" (OuterVolumeSpecName: "config") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.425166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: W0130 08:57:22.430249 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdf54085_1dd0_4eb1_9640_e75c69be5a44.slice/crio-33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852 WatchSource:0}: Error finding container 33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852: Status 404 returned error can't find the container with id 33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.433725 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cc57c95-cpkk5"]
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.440689 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.444589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.444738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") "
Jan 30 08:57:22 crc kubenswrapper[4758]: W0130 08:57:22.444964 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/846a15fd-f202-4eaf-b346-b66916daa7d1/volumes/kubernetes.io~configmap/dns-swift-storage-0
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.444980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445455 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445478 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445495 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445506 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445517 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445528 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.556587 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"]
Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.574864 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"]
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.211596 4758 generic.go:334] "Generic (PLEG): container finished" podID="cdf54085-1dd0-4eb1-9640-e75c69be5a44" containerID="3b6a35e07ed137c26f25f1b0a851b95f307d71d7b1ca35f3be1a3b3596176f32" exitCode=0
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.211695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" event={"ID":"cdf54085-1dd0-4eb1-9640-e75c69be5a44","Type":"ContainerDied","Data":"3b6a35e07ed137c26f25f1b0a851b95f307d71d7b1ca35f3be1a3b3596176f32"}
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.212017 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" event={"ID":"cdf54085-1dd0-4eb1-9640-e75c69be5a44","Type":"ContainerStarted","Data":"33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852"}
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.215146 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerStarted","Data":"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"}
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.339484 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bcpjf" podStartSLOduration=3.652808698 podStartE2EDuration="6.339458389s" podCreationTimestamp="2026-01-30 08:57:17 +0000 UTC" firstStartedPulling="2026-01-30 08:57:19.132883274 +0000 UTC m=+1644.105194815" lastFinishedPulling="2026-01-30 08:57:21.819532955 +0000 UTC m=+1646.791844506" observedRunningTime="2026-01-30 08:57:23.288791456 +0000 UTC m=+1648.261103037" watchObservedRunningTime="2026-01-30 08:57:23.339458389 +0000 UTC m=+1648.311769940"
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.769185 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"
Jan 30 08:57:23 crc kubenswrapper[4758]: E0130 08:57:23.769798 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.782852 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" path="/var/lib/kubelet/pods/846a15fd-f202-4eaf-b346-b66916daa7d1/volumes"
Jan 30 08:57:24 crc kubenswrapper[4758]: I0130 08:57:24.227332 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" event={"ID":"cdf54085-1dd0-4eb1-9640-e75c69be5a44","Type":"ContainerStarted","Data":"063d8bd7528aa4601f74ef5a25c0b2f9f965f92f19a8a77ac171f162cfc1e542"}
Jan 30 08:57:24 crc kubenswrapper[4758]: I0130 08:57:24.259127 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" podStartSLOduration=3.259110975 podStartE2EDuration="3.259110975s" podCreationTimestamp="2026-01-30 08:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:24.247778911 +0000 UTC m=+1649.220090472" watchObservedRunningTime="2026-01-30 08:57:24.259110975 +0000 UTC m=+1649.231422536"
Jan 30 08:57:25 crc kubenswrapper[4758]: I0130 08:57:25.234515 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:27 crc kubenswrapper[4758]: I0130 08:57:27.620825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:27 crc kubenswrapper[4758]: I0130 08:57:27.621393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:27 crc kubenswrapper[4758]: I0130 08:57:27.681986 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:28 crc kubenswrapper[4758]: I0130 08:57:28.310404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:28 crc kubenswrapper[4758]: I0130 08:57:28.371535 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"]
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.272259 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bcpjf" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" containerID="cri-o://f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" gracePeriod=2
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.702301 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.813239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") "
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.813377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") "
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.813508 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") "
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.815243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities" (OuterVolumeSpecName: "utilities") pod "e3009f89-995c-4ed2-81e9-3b8e76c970d0" (UID: "e3009f89-995c-4ed2-81e9-3b8e76c970d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.824878 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm" (OuterVolumeSpecName: "kube-api-access-4kqtm") pod "e3009f89-995c-4ed2-81e9-3b8e76c970d0" (UID: "e3009f89-995c-4ed2-81e9-3b8e76c970d0"). InnerVolumeSpecName "kube-api-access-4kqtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.906170 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3009f89-995c-4ed2-81e9-3b8e76c970d0" (UID: "e3009f89-995c-4ed2-81e9-3b8e76c970d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.916355 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.916539 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.916596 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.281860 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" exitCode=0
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.281911 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"}
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.282168 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15"}
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.282192 4758 scope.go:117] "RemoveContainer" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.282210 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.315764 4758 scope.go:117] "RemoveContainer" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.332942 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"]
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.346240 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"]
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.357688 4758 scope.go:117] "RemoveContainer" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.386951 4758 scope.go:117] "RemoveContainer" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"
Jan 30 08:57:31 crc kubenswrapper[4758]: E0130 08:57:31.387843 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9\": container with ID starting with f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9 not found: ID does not exist" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.387896 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"} err="failed to get container status \"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9\": rpc error: code = NotFound desc = could not find container \"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9\": container with ID starting with f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9 not found: ID does not exist"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.387932 4758 scope.go:117] "RemoveContainer" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"
Jan 30 08:57:31 crc kubenswrapper[4758]: E0130 08:57:31.388534 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f\": container with ID starting with 9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f not found: ID does not exist" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.388592 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"} err="failed to get container status \"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f\": rpc error: code = NotFound desc = could not find container \"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f\": container with ID starting with 9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f not found: ID does not exist"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.388621 4758 scope.go:117] "RemoveContainer" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"
Jan 30 08:57:31 crc kubenswrapper[4758]: E0130 08:57:31.389102 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298\": container with ID starting with cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298 not found: ID does not exist" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.389135 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"} err="failed to get container status \"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298\": rpc error: code = NotFound desc = could not find container \"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298\": container with ID starting with cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298 not found: ID does not exist"
Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.782605 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" path="/var/lib/kubelet/pods/e3009f89-995c-4ed2-81e9-3b8e76c970d0/volumes"
Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.044419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5"
Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.137082 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"]
Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.137393 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" containerID="cri-o://f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a" gracePeriod=10
Jan 30 08:57:32 crc
kubenswrapper[4758]: I0130 08:57:32.304513 4758 generic.go:334] "Generic (PLEG): container finished" podID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerID="f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a" exitCode=0 Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.304590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerDied","Data":"f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a"} Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.633498 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766538 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766690 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766784 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-886xf\" (UniqueName: 
\"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.767451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.767515 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.767626 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.781114 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf" (OuterVolumeSpecName: "kube-api-access-886xf") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "kube-api-access-886xf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.831483 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.839397 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config" (OuterVolumeSpecName: "config") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.848488 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.850591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.850825 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.853509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870152 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870257 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870272 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870286 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-886xf\" (UniqueName: 
\"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870298 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870309 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870327 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.317676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerDied","Data":"875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8"} Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.319024 4758 scope.go:117] "RemoveContainer" containerID="f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.317737 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.343375 4758 scope.go:117] "RemoveContainer" containerID="70834c4c0f2e5700560407708799402eb1ed5ab917b05c871599639beb6a62b7" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.354183 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.362901 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.783886 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" path="/var/lib/kubelet/pods/f0bdec68-2725-4515-b6f0-dc421cec4a6d/volumes" Jan 30 08:57:37 crc kubenswrapper[4758]: I0130 08:57:37.769284 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:57:37 crc kubenswrapper[4758]: E0130 08:57:37.771388 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:57:45 crc kubenswrapper[4758]: I0130 08:57:45.425713 4758 generic.go:334] "Generic (PLEG): container finished" podID="c72311ae-5d7e-4978-a690-a9bee0b3672b" containerID="2594c87d64066e1c3d6a70cb5a8bd66af20a2abcdf92c45da3629b70a6baade1" exitCode=0 Jan 30 08:57:45 crc kubenswrapper[4758]: I0130 08:57:45.425801 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerDied","Data":"2594c87d64066e1c3d6a70cb5a8bd66af20a2abcdf92c45da3629b70a6baade1"} Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.437073 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerStarted","Data":"64e43eaf45a9a2a28f864e37fff3b5dd18d98e963795ea05944fb9ada14ebbb5"} Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.437601 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.438813 4758 generic.go:334] "Generic (PLEG): container finished" podID="ec0e1aed-0ac5-4482-906f-89c9243729ea" containerID="d1c642bff2555442bfe6d2153832a4b302a1b8e531a2b0373b69b695dfe508bc" exitCode=0 Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.438853 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerDied","Data":"d1c642bff2555442bfe6d2153832a4b302a1b8e531a2b0373b69b695dfe508bc"} Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.523749 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.523705114 podStartE2EDuration="36.523705114s" podCreationTimestamp="2026-01-30 08:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:46.497369572 +0000 UTC m=+1671.469681153" watchObservedRunningTime="2026-01-30 08:57:46.523705114 +0000 UTC m=+1671.496016665" Jan 30 08:57:47 crc kubenswrapper[4758]: I0130 08:57:47.449531 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerStarted","Data":"f603760bbfdcc2cb03be1a71e3dff268e341a771ea695ad0b4217bf64941a548"} Jan 30 08:57:47 crc kubenswrapper[4758]: I0130 08:57:47.450116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:47 crc kubenswrapper[4758]: I0130 08:57:47.483584 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.483559137 podStartE2EDuration="35.483559137s" podCreationTimestamp="2026-01-30 08:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:47.472850122 +0000 UTC m=+1672.445161683" watchObservedRunningTime="2026-01-30 08:57:47.483559137 +0000 UTC m=+1672.455870698" Jan 30 08:57:49 crc kubenswrapper[4758]: I0130 08:57:49.769200 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:57:49 crc kubenswrapper[4758]: E0130 08:57:49.769521 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.373084 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c"] Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374061 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 
08:57:56.374077 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374098 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374104 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374122 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-utilities" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374147 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-utilities" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374154 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374161 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374177 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374182 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374196 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-content" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374202 4758 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-content" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374226 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374231 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374423 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374447 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374462 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.375082 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.381819 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.382108 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.382134 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.382301 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.399072 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c"] Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518725 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: 
I0130 08:57:56.518761 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620272 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620435 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.626875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.627960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.633743 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.636913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.715886 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:57 crc kubenswrapper[4758]: I0130 08:57:57.278080 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c"] Jan 30 08:57:57 crc kubenswrapper[4758]: I0130 08:57:57.539629 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerStarted","Data":"0be4b73b6f0e1e461c8cd4108115af90102d388e9f0209704f5e5094f4660b61"} Jan 30 08:58:01 crc kubenswrapper[4758]: I0130 08:58:01.096265 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 08:58:02 crc kubenswrapper[4758]: I0130 08:58:02.476264 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:58:04 crc kubenswrapper[4758]: I0130 08:58:04.769722 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:04 crc kubenswrapper[4758]: E0130 08:58:04.773274 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:06 crc kubenswrapper[4758]: I0130 08:58:06.039538 4758 scope.go:117] "RemoveContainer" containerID="67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5" Jan 30 08:58:07 crc kubenswrapper[4758]: I0130 08:58:07.918928 4758 scope.go:117] "RemoveContainer" containerID="e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8" Jan 30 08:58:08 crc kubenswrapper[4758]: I0130 08:58:08.642767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerStarted","Data":"eaf998eb2834ee46fee04c4817b18dcf9c68fe885758907b76d71ee69c1a09dc"} Jan 30 08:58:08 crc kubenswrapper[4758]: I0130 08:58:08.667101 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" podStartSLOduration=1.965461689 podStartE2EDuration="12.667082166s" podCreationTimestamp="2026-01-30 08:57:56 +0000 UTC" firstStartedPulling="2026-01-30 08:57:57.292964235 +0000 UTC m=+1682.265275786" lastFinishedPulling="2026-01-30 08:58:07.994584712 +0000 UTC m=+1692.966896263" observedRunningTime="2026-01-30 08:58:08.656584608 +0000 UTC m=+1693.628896179" watchObservedRunningTime="2026-01-30 08:58:08.667082166 +0000 UTC m=+1693.639393717" Jan 30 08:58:15 crc kubenswrapper[4758]: I0130 08:58:15.777346 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:15 crc kubenswrapper[4758]: E0130 08:58:15.778263 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:19 crc kubenswrapper[4758]: I0130 08:58:19.740695 4758 generic.go:334] "Generic (PLEG): container finished" podID="350797d0-7758-4c0d-84cb-ec160451f377" containerID="eaf998eb2834ee46fee04c4817b18dcf9c68fe885758907b76d71ee69c1a09dc" exitCode=0 Jan 30 08:58:19 crc kubenswrapper[4758]: I0130 08:58:19.740824 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerDied","Data":"eaf998eb2834ee46fee04c4817b18dcf9c68fe885758907b76d71ee69c1a09dc"} Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.266369 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.348993 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.349468 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.349649 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.350542 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.357450 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm" (OuterVolumeSpecName: "kube-api-access-s8crm") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "kube-api-access-s8crm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.358192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.378876 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory" (OuterVolumeSpecName: "inventory") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.382570 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.453867 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.454099 4758 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.454166 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.454230 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.765889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerDied","Data":"0be4b73b6f0e1e461c8cd4108115af90102d388e9f0209704f5e5094f4660b61"} Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.765952 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be4b73b6f0e1e461c8cd4108115af90102d388e9f0209704f5e5094f4660b61" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.766366 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.874783 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b"] Jan 30 08:58:21 crc kubenswrapper[4758]: E0130 08:58:21.875514 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350797d0-7758-4c0d-84cb-ec160451f377" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.875546 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="350797d0-7758-4c0d-84cb-ec160451f377" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.875762 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="350797d0-7758-4c0d-84cb-ec160451f377" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.876679 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879557 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879637 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879681 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879561 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.889418 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b"] Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.970169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.970579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.970688 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.072636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.072709 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.072744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.076759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.080878 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.090909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.195790 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.734119 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b"] Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.776887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerStarted","Data":"6029110f7b585297305a3e0da33da93e6ca65c743da75995e51c90909b2039a8"} Jan 30 08:58:23 crc kubenswrapper[4758]: I0130 08:58:23.799990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerStarted","Data":"49082d8328ca84ebe4152bfeac7eee5e846d4e7cedf87c007c5c3ea875579b61"} Jan 30 08:58:23 crc kubenswrapper[4758]: I0130 08:58:23.820001 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" podStartSLOduration=2.082644262 podStartE2EDuration="2.819958312s" podCreationTimestamp="2026-01-30 08:58:21 +0000 UTC" firstStartedPulling="2026-01-30 08:58:22.725575095 +0000 UTC m=+1707.697886646" lastFinishedPulling="2026-01-30 08:58:23.462889145 +0000 UTC m=+1708.435200696" observedRunningTime="2026-01-30 08:58:23.819699223 +0000 UTC m=+1708.792010774" watchObservedRunningTime="2026-01-30 08:58:23.819958312 +0000 UTC m=+1708.792269863" Jan 30 08:58:26 crc kubenswrapper[4758]: I0130 08:58:26.827539 4758 generic.go:334] "Generic (PLEG): container finished" podID="a7d607da-5923-4ef5-82ba-083df5db3864" containerID="49082d8328ca84ebe4152bfeac7eee5e846d4e7cedf87c007c5c3ea875579b61" exitCode=0 Jan 30 08:58:26 crc kubenswrapper[4758]: I0130 08:58:26.827626 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerDied","Data":"49082d8328ca84ebe4152bfeac7eee5e846d4e7cedf87c007c5c3ea875579b61"} Jan 30 08:58:27 crc kubenswrapper[4758]: I0130 08:58:27.769263 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:27 crc kubenswrapper[4758]: E0130 08:58:27.769547 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.324980 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.401168 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"a7d607da-5923-4ef5-82ba-083df5db3864\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.401214 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"a7d607da-5923-4ef5-82ba-083df5db3864\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.401267 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"a7d607da-5923-4ef5-82ba-083df5db3864\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.414595 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m" (OuterVolumeSpecName: "kube-api-access-7rb8m") pod "a7d607da-5923-4ef5-82ba-083df5db3864" (UID: "a7d607da-5923-4ef5-82ba-083df5db3864"). InnerVolumeSpecName "kube-api-access-7rb8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.433843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7d607da-5923-4ef5-82ba-083df5db3864" (UID: "a7d607da-5923-4ef5-82ba-083df5db3864"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.460638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory" (OuterVolumeSpecName: "inventory") pod "a7d607da-5923-4ef5-82ba-083df5db3864" (UID: "a7d607da-5923-4ef5-82ba-083df5db3864"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.503149 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.503183 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.503195 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.859056 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" 
event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerDied","Data":"6029110f7b585297305a3e0da33da93e6ca65c743da75995e51c90909b2039a8"} Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.859417 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6029110f7b585297305a3e0da33da93e6ca65c743da75995e51c90909b2039a8" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.859121 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.933630 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp"] Jan 30 08:58:28 crc kubenswrapper[4758]: E0130 08:58:28.934291 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d607da-5923-4ef5-82ba-083df5db3864" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.934368 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d607da-5923-4ef5-82ba-083df5db3864" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.934662 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d607da-5923-4ef5-82ba-083df5db3864" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.935472 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.939654 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.939653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.940013 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.940739 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.944370 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp"] Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.115100 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.116240 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.116599 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.116862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.217907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.217968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.218016 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.218100 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.221992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.222549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.230080 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.247005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.258921 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.803103 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp"] Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.868519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerStarted","Data":"25a2f1bc96743755a476b5e1b766c3ea8d8ac8c33ead2cf689d320248e522ec6"} Jan 30 08:58:30 crc kubenswrapper[4758]: I0130 08:58:30.880852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerStarted","Data":"b2b90750683b86930522f155931531da5a2fe961e31b6ee7ce56016bc1eeeff0"} Jan 30 08:58:30 crc kubenswrapper[4758]: I0130 08:58:30.904227 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" podStartSLOduration=2.465470447 podStartE2EDuration="2.904208426s" podCreationTimestamp="2026-01-30 08:58:28 +0000 UTC" firstStartedPulling="2026-01-30 08:58:29.822229956 +0000 UTC m=+1714.794541507" 
lastFinishedPulling="2026-01-30 08:58:30.260967945 +0000 UTC m=+1715.233279486" observedRunningTime="2026-01-30 08:58:30.899790527 +0000 UTC m=+1715.872102078" watchObservedRunningTime="2026-01-30 08:58:30.904208426 +0000 UTC m=+1715.876519977" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.058090 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.066815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.076492 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.097152 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.109699 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.120591 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.128996 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.136969 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.779734 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" path="/var/lib/kubelet/pods/2f506e0d-da4f-4243-923b-f7e102fafd92/volumes" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.780378 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" 
path="/var/lib/kubelet/pods/418d35e8-ad4e-4051-a0f1-fc9179300441/volumes" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.781796 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" path="/var/lib/kubelet/pods/585a6b3e-695a-4ddc-91a6-9b39b241ffd0/volumes" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.783132 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" path="/var/lib/kubelet/pods/a04121a4-3ab4-48c5-903e-8d0002771ff7/volumes" Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.037418 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.046303 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.055696 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.064989 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:58:37 crc kubenswrapper[4758]: I0130 08:58:37.804429 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" path="/var/lib/kubelet/pods/1e9c3b23-6678-42de-923a-27ebb5fb61a3/volumes" Jan 30 08:58:37 crc kubenswrapper[4758]: I0130 08:58:37.806916 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" path="/var/lib/kubelet/pods/2121970a-59f7-4943-ac1e-fa675e5eef8e/volumes" Jan 30 08:58:39 crc kubenswrapper[4758]: I0130 08:58:39.769027 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:39 crc kubenswrapper[4758]: 
E0130 08:58:39.769499 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:43 crc kubenswrapper[4758]: I0130 08:58:43.042245 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:58:43 crc kubenswrapper[4758]: I0130 08:58:43.051299 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:58:43 crc kubenswrapper[4758]: I0130 08:58:43.781162 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" path="/var/lib/kubelet/pods/78535c40-59db-4a0e-bcb9-4bae7e92548c/volumes" Jan 30 08:58:53 crc kubenswrapper[4758]: I0130 08:58:53.769775 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:53 crc kubenswrapper[4758]: E0130 08:58:53.770591 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:04 crc kubenswrapper[4758]: I0130 08:59:04.034300 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:59:04 crc kubenswrapper[4758]: I0130 08:59:04.044853 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.025666 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.033777 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.784970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" path="/var/lib/kubelet/pods/6b87c6fc-6491-419a-96bb-9542edb1e8aa/volumes" Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.786731 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" path="/var/lib/kubelet/pods/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30/volumes" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.118292 4758 scope.go:117] "RemoveContainer" containerID="79153cef375b67170413da87462ce99cc5566739d6bdf2f6c1513a6b0ebe4db9" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.161214 4758 scope.go:117] "RemoveContainer" containerID="a6e4fb8d0b259571728fd753067c06a5a86fb0519ae153af2e5b3b44c3d06d59" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.224236 4758 scope.go:117] "RemoveContainer" containerID="26c6c01e8de6266c91045a1615b456f02f5eaff630949bebc138b39fd169f580" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.266506 4758 scope.go:117] "RemoveContainer" containerID="1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.333887 4758 scope.go:117] "RemoveContainer" containerID="6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.359837 4758 scope.go:117] "RemoveContainer" containerID="1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.405813 
4758 scope.go:117] "RemoveContainer" containerID="defa08629f2c819116b155489e34ad3fab9a107ee7bcfc7941be06048d203e56" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.425258 4758 scope.go:117] "RemoveContainer" containerID="daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.454181 4758 scope.go:117] "RemoveContainer" containerID="aabb12ec35a8ae64e19c55b39db2dfd8844c3fe734c3d7e0940dfeeffb34c9d1" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.769461 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:08 crc kubenswrapper[4758]: E0130 08:59:08.769833 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.063792 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.079383 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.087275 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.096085 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.104012 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 
08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.112637 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.120965 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.128601 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.779160 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" path="/var/lib/kubelet/pods/58afb54a-7b57-42bc-af7c-13db0bfd1580/volumes" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.781970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" path="/var/lib/kubelet/pods/8a01921f-9f43-471a-bbd3-0a7e9bab364e/volumes" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.783232 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" path="/var/lib/kubelet/pods/c85c22cc-99a7-4939-904c-bffa8f2d5457/volumes" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.785683 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" path="/var/lib/kubelet/pods/f4bedf0c-06c1-4eaa-b731-3b8c2438456d/volumes" Jan 30 08:59:12 crc kubenswrapper[4758]: I0130 08:59:12.032193 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:59:12 crc kubenswrapper[4758]: I0130 08:59:12.040932 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:59:13 crc kubenswrapper[4758]: I0130 08:59:13.782020 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" path="/var/lib/kubelet/pods/d2e127b5-11e2-40a6-8389-a9d08b8cae4f/volumes" Jan 30 08:59:15 crc kubenswrapper[4758]: I0130 08:59:15.041137 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:59:15 crc kubenswrapper[4758]: I0130 08:59:15.052329 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:59:15 crc kubenswrapper[4758]: I0130 08:59:15.783317 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" path="/var/lib/kubelet/pods/5665f568-1d3e-48dc-8a32-7bd9ad02a037/volumes" Jan 30 08:59:22 crc kubenswrapper[4758]: I0130 08:59:22.769417 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:22 crc kubenswrapper[4758]: E0130 08:59:22.770716 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:32 crc kubenswrapper[4758]: I0130 08:59:32.820275 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:32 crc kubenswrapper[4758]: E0130 08:59:32.821757 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:44 crc kubenswrapper[4758]: I0130 08:59:44.769956 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:44 crc kubenswrapper[4758]: E0130 08:59:44.771425 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:51 crc kubenswrapper[4758]: I0130 08:59:51.041482 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:59:51 crc kubenswrapper[4758]: I0130 08:59:51.048201 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:59:51 crc kubenswrapper[4758]: I0130 08:59:51.781318 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" path="/var/lib/kubelet/pods/b166e095-ba6b-443f-8c0a-0e83bb698ccd/volumes" Jan 30 08:59:55 crc kubenswrapper[4758]: I0130 08:59:55.775271 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:55 crc kubenswrapper[4758]: E0130 08:59:55.776951 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:00:00 crc kubenswrapper[4758]: 
I0130 09:00:00.151828 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.153823 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.159113 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.159676 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.170761 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.180163 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.180271 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.180359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.282586 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.282678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.282717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.283899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc 
kubenswrapper[4758]: I0130 09:00:00.296295 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.299952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.476207 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.906637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:00:01 crc kubenswrapper[4758]: I0130 09:00:01.746666 4758 generic.go:334] "Generic (PLEG): container finished" podID="e8b22039-92b3-4488-a971-2913dedb64ed" containerID="04e032ddde7c1d4a35b13c1c04e206e7ca345c709ebf2a8892f46a64701c926c" exitCode=0 Jan 30 09:00:01 crc kubenswrapper[4758]: I0130 09:00:01.746868 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" event={"ID":"e8b22039-92b3-4488-a971-2913dedb64ed","Type":"ContainerDied","Data":"04e032ddde7c1d4a35b13c1c04e206e7ca345c709ebf2a8892f46a64701c926c"} Jan 30 09:00:01 crc kubenswrapper[4758]: I0130 09:00:01.747243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" event={"ID":"e8b22039-92b3-4488-a971-2913dedb64ed","Type":"ContainerStarted","Data":"b182bf0b41a1913720a2b1bb4645bfa412a564e1599ae5bd1baa8a674af75d72"} Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.149973 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.253014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"e8b22039-92b3-4488-a971-2913dedb64ed\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.253257 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"e8b22039-92b3-4488-a971-2913dedb64ed\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.253304 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"e8b22039-92b3-4488-a971-2913dedb64ed\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.254361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8b22039-92b3-4488-a971-2913dedb64ed" (UID: "e8b22039-92b3-4488-a971-2913dedb64ed"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.260271 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh" (OuterVolumeSpecName: "kube-api-access-ptrfh") pod "e8b22039-92b3-4488-a971-2913dedb64ed" (UID: "e8b22039-92b3-4488-a971-2913dedb64ed"). InnerVolumeSpecName "kube-api-access-ptrfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.261151 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8b22039-92b3-4488-a971-2913dedb64ed" (UID: "e8b22039-92b3-4488-a971-2913dedb64ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.356082 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.356124 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.356134 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.766438 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" 
event={"ID":"e8b22039-92b3-4488-a971-2913dedb64ed","Type":"ContainerDied","Data":"b182bf0b41a1913720a2b1bb4645bfa412a564e1599ae5bd1baa8a674af75d72"} Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.766515 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b182bf0b41a1913720a2b1bb4645bfa412a564e1599ae5bd1baa8a674af75d72" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.766515 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:04 crc kubenswrapper[4758]: I0130 09:00:04.042567 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 09:00:04 crc kubenswrapper[4758]: I0130 09:00:04.051505 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 09:00:05 crc kubenswrapper[4758]: I0130 09:00:05.779139 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f7c236-867a-465b-9514-de6a765b312b" path="/var/lib/kubelet/pods/11f7c236-867a-465b-9514-de6a765b312b/volumes" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.608458 4758 scope.go:117] "RemoveContainer" containerID="a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.646547 4758 scope.go:117] "RemoveContainer" containerID="de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.694467 4758 scope.go:117] "RemoveContainer" containerID="23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.727470 4758 scope.go:117] "RemoveContainer" containerID="1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.768867 4758 scope.go:117] "RemoveContainer" 
containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 09:00:08 crc kubenswrapper[4758]: E0130 09:00:08.769280 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.784917 4758 scope.go:117] "RemoveContainer" containerID="b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.814896 4758 scope.go:117] "RemoveContainer" containerID="2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.873524 4758 scope.go:117] "RemoveContainer" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.893809 4758 scope.go:117] "RemoveContainer" containerID="edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.915241 4758 scope.go:117] "RemoveContainer" containerID="0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451" Jan 30 09:00:12 crc kubenswrapper[4758]: I0130 09:00:12.033378 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 09:00:12 crc kubenswrapper[4758]: I0130 09:00:12.042944 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 09:00:13 crc kubenswrapper[4758]: I0130 09:00:13.780108 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" 
path="/var/lib/kubelet/pods/419e16c4-297d-490a-8fd3-6d365e20f5f2/volumes" Jan 30 09:00:22 crc kubenswrapper[4758]: I0130 09:00:22.769012 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 09:00:23 crc kubenswrapper[4758]: I0130 09:00:23.954298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0"} Jan 30 09:00:24 crc kubenswrapper[4758]: I0130 09:00:24.064337 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 09:00:24 crc kubenswrapper[4758]: I0130 09:00:24.076004 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.061127 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.069530 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.779624 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" path="/var/lib/kubelet/pods/12ff21aa-edae-4f56-a2ea-be0deb2d84d7/volumes" Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.781484 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" path="/var/lib/kubelet/pods/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3/volumes" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.158100 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496061-prvzt"] Jan 30 09:01:00 crc kubenswrapper[4758]: E0130 09:01:00.158971 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.158984 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.159242 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.159832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.215355 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496061-prvzt"] Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.317846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.318194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.318455 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"keystone-cron-29496061-prvzt\" (UID: 
\"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.318675 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421114 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421315 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " 
pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.427864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.431205 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.431984 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.442836 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.482741 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.952845 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496061-prvzt"] Jan 30 09:01:01 crc kubenswrapper[4758]: I0130 09:01:01.720898 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerStarted","Data":"31c39afda45cb5bab6823cada480f84df8f6a4c26df945f64901b847cbbf0edb"} Jan 30 09:01:01 crc kubenswrapper[4758]: I0130 09:01:01.721243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerStarted","Data":"54ee80781f361f9610a63542584fb45e2211622231ba8ebf996592a73513e867"} Jan 30 09:01:02 crc kubenswrapper[4758]: I0130 09:01:02.754497 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496061-prvzt" podStartSLOduration=2.754476606 podStartE2EDuration="2.754476606s" podCreationTimestamp="2026-01-30 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:01:02.748638094 +0000 UTC m=+1867.720949655" watchObservedRunningTime="2026-01-30 09:01:02.754476606 +0000 UTC m=+1867.726788157" Jan 30 09:01:06 crc kubenswrapper[4758]: I0130 09:01:06.764346 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerID="31c39afda45cb5bab6823cada480f84df8f6a4c26df945f64901b847cbbf0edb" exitCode=0 Jan 30 09:01:06 crc kubenswrapper[4758]: I0130 09:01:06.764503 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" 
event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerDied","Data":"31c39afda45cb5bab6823cada480f84df8f6a4c26df945f64901b847cbbf0edb"} Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.139537 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171482 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171771 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171802 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.189314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd" 
(OuterVolumeSpecName: "kube-api-access-d8jwd") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "kube-api-access-d8jwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.189321 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.212695 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.252285 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data" (OuterVolumeSpecName: "config-data") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284332 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284370 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284386 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284402 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.783366 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerDied","Data":"54ee80781f361f9610a63542584fb45e2211622231ba8ebf996592a73513e867"} Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.783410 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ee80781f361f9610a63542584fb45e2211622231ba8ebf996592a73513e867" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.783416 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:09 crc kubenswrapper[4758]: I0130 09:01:09.085665 4758 scope.go:117] "RemoveContainer" containerID="c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9" Jan 30 09:01:09 crc kubenswrapper[4758]: I0130 09:01:09.144811 4758 scope.go:117] "RemoveContainer" containerID="c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04" Jan 30 09:01:09 crc kubenswrapper[4758]: I0130 09:01:09.226786 4758 scope.go:117] "RemoveContainer" containerID="ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.113453 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.134221 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.144895 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.155215 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.164559 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.177917 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.188915 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.200497 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 09:01:21 
crc kubenswrapper[4758]: I0130 09:01:21.211810 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.255338 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.262552 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.269964 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.783308 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" path="/var/lib/kubelet/pods/1d2dbd4c-2c5d-4865-8cf2-ce663e060369/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.784590 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" path="/var/lib/kubelet/pods/227bbca8-b963-4b39-af28-ac9dbf50bc73/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.786396 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" path="/var/lib/kubelet/pods/6a7e2817-2850-4d53-8ebf-4977eea68664/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.787600 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" path="/var/lib/kubelet/pods/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.789433 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" path="/var/lib/kubelet/pods/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.790382 4758 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" path="/var/lib/kubelet/pods/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8/volumes" Jan 30 09:01:41 crc kubenswrapper[4758]: I0130 09:01:41.067827 4758 generic.go:334] "Generic (PLEG): container finished" podID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerID="b2b90750683b86930522f155931531da5a2fe961e31b6ee7ce56016bc1eeeff0" exitCode=0 Jan 30 09:01:41 crc kubenswrapper[4758]: I0130 09:01:41.067952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerDied","Data":"b2b90750683b86930522f155931531da5a2fe961e31b6ee7ce56016bc1eeeff0"} Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.581446 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.743067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.743793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.743959 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" 
(UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.744214 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.750261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt" (OuterVolumeSpecName: "kube-api-access-vvktt") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "kube-api-access-vvktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.752261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.775934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.779686 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory" (OuterVolumeSpecName: "inventory") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846397 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846436 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846449 4758 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846458 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.092247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerDied","Data":"25a2f1bc96743755a476b5e1b766c3ea8d8ac8c33ead2cf689d320248e522ec6"} Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.092319 4758 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a2f1bc96743755a476b5e1b766c3ea8d8ac8c33ead2cf689d320248e522ec6" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.092332 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.203396 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w"] Jan 30 09:01:43 crc kubenswrapper[4758]: E0130 09:01:43.204182 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerName="keystone-cron" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204206 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerName="keystone-cron" Jan 30 09:01:43 crc kubenswrapper[4758]: E0130 09:01:43.204247 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204259 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204635 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerName="keystone-cron" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204664 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.205491 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.211232 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.211500 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.211756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.212536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.224875 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w"] Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.357995 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.358080 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc 
kubenswrapper[4758]: I0130 09:01:43.358134 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.465508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.465648 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.465798 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.471789 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.479548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.488572 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.525487 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:44 crc kubenswrapper[4758]: I0130 09:01:44.029703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w"] Jan 30 09:01:44 crc kubenswrapper[4758]: I0130 09:01:44.104195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerStarted","Data":"dc8c698a7e3cc8b7b9e80c909e94c993ae488290acead9874c7b8920f841aaf0"} Jan 30 09:01:45 crc kubenswrapper[4758]: I0130 09:01:45.114355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerStarted","Data":"84be8ae0cf87735785390fb892a6fee69f14471016eb2f92e7cf490645bad8b1"} Jan 30 09:01:45 crc kubenswrapper[4758]: I0130 09:01:45.140718 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" podStartSLOduration=1.6518071060000001 podStartE2EDuration="2.1406631s" podCreationTimestamp="2026-01-30 09:01:43 +0000 UTC" firstStartedPulling="2026-01-30 09:01:44.037974403 +0000 UTC m=+1909.010285954" lastFinishedPulling="2026-01-30 09:01:44.526830397 +0000 UTC m=+1909.499141948" observedRunningTime="2026-01-30 09:01:45.129063263 +0000 UTC m=+1910.101374824" watchObservedRunningTime="2026-01-30 09:01:45.1406631 +0000 UTC m=+1910.112974661" Jan 30 09:02:05 crc kubenswrapper[4758]: I0130 09:02:05.042166 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 09:02:05 crc kubenswrapper[4758]: I0130 09:02:05.054121 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 09:02:05 crc 
kubenswrapper[4758]: I0130 09:02:05.780552 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" path="/var/lib/kubelet/pods/06a33948-1e21-49dd-9f48-b4c188ae6e9d/volumes" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.341098 4758 scope.go:117] "RemoveContainer" containerID="638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.398634 4758 scope.go:117] "RemoveContainer" containerID="6b5f2a511f68dead3ed1b1b92615f5c6856551f3b3f434c8b9484d4f99758a12" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.434335 4758 scope.go:117] "RemoveContainer" containerID="67a4a1af734c4adf4de193490eeca87888430ce68cfa2191062d7093ddc838ba" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.475723 4758 scope.go:117] "RemoveContainer" containerID="4f8803add769de9f15a23dbfbf03688310d1c6b3d5c93d79adba46cb16dec5ca" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.513048 4758 scope.go:117] "RemoveContainer" containerID="2a553d18e8c8709e9420435d0699cca30262f29d812796ba71700ab0845f9d1c" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.557413 4758 scope.go:117] "RemoveContainer" containerID="54d62ade615f4d66aac7ea9231240e05dbce2b51363637c91db0cd5147899e3f" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.605307 4758 scope.go:117] "RemoveContainer" containerID="e022a3d99ce24d9dd10ef619189c15979640758b052ecd93de5065aa03c1b11a" Jan 30 09:02:22 crc kubenswrapper[4758]: I0130 09:02:22.388001 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:02:22 crc kubenswrapper[4758]: I0130 09:02:22.388557 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:02:52 crc kubenswrapper[4758]: I0130 09:02:52.387732 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:02:52 crc kubenswrapper[4758]: I0130 09:02:52.388468 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:03:13 crc kubenswrapper[4758]: I0130 09:03:13.039891 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 09:03:13 crc kubenswrapper[4758]: I0130 09:03:13.050426 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 09:03:13 crc kubenswrapper[4758]: I0130 09:03:13.778591 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c7907da-98f4-46c2-9089-7227516cf739" path="/var/lib/kubelet/pods/9c7907da-98f4-46c2-9089-7227516cf739/volumes" Jan 30 09:03:20 crc kubenswrapper[4758]: I0130 09:03:20.048680 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 09:03:20 crc kubenswrapper[4758]: I0130 09:03:20.058224 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 09:03:21 crc kubenswrapper[4758]: I0130 
09:03:21.780180 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" path="/var/lib/kubelet/pods/42581858-ead1-4898-9fad-72411bf3c6a4/volumes" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.387027 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.387368 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.387413 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.388085 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.388146 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0" gracePeriod=600 Jan 30 09:03:22 
crc kubenswrapper[4758]: I0130 09:03:22.961842 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0" exitCode=0 Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.961899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0"} Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.962370 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"} Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.962404 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 09:03:24 crc kubenswrapper[4758]: I0130 09:03:24.980938 4758 generic.go:334] "Generic (PLEG): container finished" podID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerID="84be8ae0cf87735785390fb892a6fee69f14471016eb2f92e7cf490645bad8b1" exitCode=0 Jan 30 09:03:24 crc kubenswrapper[4758]: I0130 09:03:24.981063 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerDied","Data":"84be8ae0cf87735785390fb892a6fee69f14471016eb2f92e7cf490645bad8b1"} Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.449228 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.587000 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"103d82ed-724d-4545-9b1c-04633d68c1ef\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.587240 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"103d82ed-724d-4545-9b1c-04633d68c1ef\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.587354 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"103d82ed-724d-4545-9b1c-04633d68c1ef\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.594224 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns" (OuterVolumeSpecName: "kube-api-access-v79ns") pod "103d82ed-724d-4545-9b1c-04633d68c1ef" (UID: "103d82ed-724d-4545-9b1c-04633d68c1ef"). InnerVolumeSpecName "kube-api-access-v79ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.621458 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory" (OuterVolumeSpecName: "inventory") pod "103d82ed-724d-4545-9b1c-04633d68c1ef" (UID: "103d82ed-724d-4545-9b1c-04633d68c1ef"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.622292 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "103d82ed-724d-4545-9b1c-04633d68c1ef" (UID: "103d82ed-724d-4545-9b1c-04633d68c1ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.689994 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.690092 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") on node \"crc\" DevicePath \"\"" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.690108 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.001764 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerDied","Data":"dc8c698a7e3cc8b7b9e80c909e94c993ae488290acead9874c7b8920f841aaf0"} Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.002141 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8c698a7e3cc8b7b9e80c909e94c993ae488290acead9874c7b8920f841aaf0" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 
09:03:27.001808 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.150843 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn"] Jan 30 09:03:27 crc kubenswrapper[4758]: E0130 09:03:27.151280 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.151304 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.151561 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.152361 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.154740 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.155176 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.155376 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.155488 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.174648 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn"] Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.302343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.302690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: 
I0130 09:03:27.302762 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.404058 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.404637 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.404920 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.416199 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.417486 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.424701 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.472976 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.975208 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn"] Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.980680 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:03:28 crc kubenswrapper[4758]: I0130 09:03:28.011419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerStarted","Data":"22d6485e6d1373e9c1fd9189afdc079b81c1850b7190e78e117073ff593293a2"} Jan 30 09:03:29 crc kubenswrapper[4758]: I0130 09:03:29.026821 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerStarted","Data":"2c32173feef6cb9a2c38285cc27663ba19215a548a7eac139d2cd7b311a40fd7"} Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.045014 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" podStartSLOduration=31.417722064 podStartE2EDuration="32.044986043s" podCreationTimestamp="2026-01-30 09:03:27 +0000 UTC" firstStartedPulling="2026-01-30 09:03:27.980405998 +0000 UTC m=+2012.952717549" lastFinishedPulling="2026-01-30 09:03:28.607669977 +0000 UTC m=+2013.579981528" observedRunningTime="2026-01-30 09:03:29.05353199 +0000 UTC m=+2014.025843611" watchObservedRunningTime="2026-01-30 09:03:59.044986043 +0000 UTC m=+2044.017297604" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.049344 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 09:03:59 crc 
kubenswrapper[4758]: I0130 09:03:59.057926 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.632110 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.634000 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.649098 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.763108 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.763253 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.763298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.797836 4758 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" path="/var/lib/kubelet/pods/aaef08b6-5771-4e63-90e2-f3eb803993ad/volumes" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.865087 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.866367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.866707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.865929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.867017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"redhat-operators-cjxmq\" (UID: 
\"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.891320 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.955485 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:00 crc kubenswrapper[4758]: I0130 09:04:00.444803 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:01 crc kubenswrapper[4758]: I0130 09:04:01.313295 4758 generic.go:334] "Generic (PLEG): container finished" podID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" exitCode=0 Jan 30 09:04:01 crc kubenswrapper[4758]: I0130 09:04:01.313318 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598"} Jan 30 09:04:01 crc kubenswrapper[4758]: I0130 09:04:01.313774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerStarted","Data":"604f2f2ff8b464eaacc934db9131008fb807f8f7723bad18c38405e8b17d4ba9"} Jan 30 09:04:03 crc kubenswrapper[4758]: I0130 09:04:03.331213 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" 
event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerStarted","Data":"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0"} Jan 30 09:04:08 crc kubenswrapper[4758]: I0130 09:04:08.380891 4758 generic.go:334] "Generic (PLEG): container finished" podID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" exitCode=0 Jan 30 09:04:08 crc kubenswrapper[4758]: I0130 09:04:08.380973 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0"} Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.393999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerStarted","Data":"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6"} Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.792953 4758 scope.go:117] "RemoveContainer" containerID="83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.834903 4758 scope.go:117] "RemoveContainer" containerID="fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.898220 4758 scope.go:117] "RemoveContainer" containerID="87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.956931 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.956985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:11 crc kubenswrapper[4758]: 
I0130 09:04:11.013575 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjxmq" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" probeResult="failure" output=< Jan 30 09:04:11 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:04:11 crc kubenswrapper[4758]: > Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.004714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.028387 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cjxmq" podStartSLOduration=13.522734383 podStartE2EDuration="21.028361516s" podCreationTimestamp="2026-01-30 09:03:59 +0000 UTC" firstStartedPulling="2026-01-30 09:04:01.31580972 +0000 UTC m=+2046.288121271" lastFinishedPulling="2026-01-30 09:04:08.821436853 +0000 UTC m=+2053.793748404" observedRunningTime="2026-01-30 09:04:09.419782586 +0000 UTC m=+2054.392094147" watchObservedRunningTime="2026-01-30 09:04:20.028361516 +0000 UTC m=+2065.000673067" Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.057549 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.247186 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:21 crc kubenswrapper[4758]: I0130 09:04:21.491993 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cjxmq" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" containerID="cri-o://ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" gracePeriod=2 Jan 30 09:04:21 crc kubenswrapper[4758]: I0130 09:04:21.991022 4758 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.013613 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"0cd223a4-37c2-4e55-9516-bc48c435b50c\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.013709 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"0cd223a4-37c2-4e55-9516-bc48c435b50c\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.013873 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"0cd223a4-37c2-4e55-9516-bc48c435b50c\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.016176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities" (OuterVolumeSpecName: "utilities") pod "0cd223a4-37c2-4e55-9516-bc48c435b50c" (UID: "0cd223a4-37c2-4e55-9516-bc48c435b50c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.035320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd" (OuterVolumeSpecName: "kube-api-access-nc5gd") pod "0cd223a4-37c2-4e55-9516-bc48c435b50c" (UID: "0cd223a4-37c2-4e55-9516-bc48c435b50c"). 
InnerVolumeSpecName "kube-api-access-nc5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.117241 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.117288 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.183293 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cd223a4-37c2-4e55-9516-bc48c435b50c" (UID: "0cd223a4-37c2-4e55-9516-bc48c435b50c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.218711 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.501855 4758 generic.go:334] "Generic (PLEG): container finished" podID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" exitCode=0 Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502114 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6"} Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502155 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"604f2f2ff8b464eaacc934db9131008fb807f8f7723bad18c38405e8b17d4ba9"} Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502170 4758 scope.go:117] "RemoveContainer" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502292 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.525708 4758 scope.go:117] "RemoveContainer" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.547050 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.564734 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.567543 4758 scope.go:117] "RemoveContainer" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.603160 4758 scope.go:117] "RemoveContainer" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" Jan 30 09:04:22 crc kubenswrapper[4758]: E0130 09:04:22.604883 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6\": container with ID starting with ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6 not found: ID does not exist" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.604924 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6"} err="failed to get container status \"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6\": rpc error: code = NotFound desc = could not find container \"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6\": container with ID starting with ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6 not found: ID does 
not exist" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.604961 4758 scope.go:117] "RemoveContainer" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" Jan 30 09:04:22 crc kubenswrapper[4758]: E0130 09:04:22.605308 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0\": container with ID starting with ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0 not found: ID does not exist" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.605364 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0"} err="failed to get container status \"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0\": rpc error: code = NotFound desc = could not find container \"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0\": container with ID starting with ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0 not found: ID does not exist" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.605388 4758 scope.go:117] "RemoveContainer" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" Jan 30 09:04:22 crc kubenswrapper[4758]: E0130 09:04:22.605757 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598\": container with ID starting with e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598 not found: ID does not exist" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.605788 4758 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598"} err="failed to get container status \"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598\": rpc error: code = NotFound desc = could not find container \"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598\": container with ID starting with e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598 not found: ID does not exist" Jan 30 09:04:23 crc kubenswrapper[4758]: I0130 09:04:23.778289 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" path="/var/lib/kubelet/pods/0cd223a4-37c2-4e55-9516-bc48c435b50c/volumes" Jan 30 09:04:43 crc kubenswrapper[4758]: I0130 09:04:43.659885 4758 generic.go:334] "Generic (PLEG): container finished" podID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerID="2c32173feef6cb9a2c38285cc27663ba19215a548a7eac139d2cd7b311a40fd7" exitCode=0 Jan 30 09:04:43 crc kubenswrapper[4758]: I0130 09:04:43.660026 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerDied","Data":"2c32173feef6cb9a2c38285cc27663ba19215a548a7eac139d2cd7b311a40fd7"} Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.084953 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.280964 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.281055 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.281234 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.288017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2" (OuterVolumeSpecName: "kube-api-access-c4dm2") pod "a282c0aa-8c3d-4a78-9fd6-1971701a1158" (UID: "a282c0aa-8c3d-4a78-9fd6-1971701a1158"). InnerVolumeSpecName "kube-api-access-c4dm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.312596 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a282c0aa-8c3d-4a78-9fd6-1971701a1158" (UID: "a282c0aa-8c3d-4a78-9fd6-1971701a1158"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.317878 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory" (OuterVolumeSpecName: "inventory") pod "a282c0aa-8c3d-4a78-9fd6-1971701a1158" (UID: "a282c0aa-8c3d-4a78-9fd6-1971701a1158"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.383032 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.383077 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.383088 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.679599 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" 
event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerDied","Data":"22d6485e6d1373e9c1fd9189afdc079b81c1850b7190e78e117073ff593293a2"} Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.679646 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d6485e6d1373e9c1fd9189afdc079b81c1850b7190e78e117073ff593293a2" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.679678 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197265 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r"] Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197877 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197889 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197923 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-content" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197929 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-content" Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197945 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-utilities" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197951 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-utilities" Jan 30 09:04:46 crc 
kubenswrapper[4758]: E0130 09:04:46.197962 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197969 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.198149 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.198189 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.198752 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201599 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201636 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201840 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.227876 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r"] Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.402317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.402690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 
09:04:46.402729 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.504790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.504852 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.504932 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.515598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.515823 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.566842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.821303 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:47 crc kubenswrapper[4758]: I0130 09:04:47.457488 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r"] Jan 30 09:04:47 crc kubenswrapper[4758]: I0130 09:04:47.699133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerStarted","Data":"d352897def0b617952e7c67684ba624775564f16c5a6496bda83f806b89314ef"} Jan 30 09:04:49 crc kubenswrapper[4758]: I0130 09:04:49.726289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerStarted","Data":"18998ab7144638e9b2dfc92ec7bcdccca85b06ffdbc64b6bcf7bb049fb552fc0"} Jan 30 09:04:49 crc kubenswrapper[4758]: I0130 09:04:49.746295 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" podStartSLOduration=1.9269244890000001 podStartE2EDuration="3.746275918s" podCreationTimestamp="2026-01-30 09:04:46 +0000 UTC" firstStartedPulling="2026-01-30 09:04:47.46866209 +0000 UTC m=+2092.440973641" lastFinishedPulling="2026-01-30 09:04:49.288013519 +0000 UTC m=+2094.260325070" observedRunningTime="2026-01-30 09:04:49.739220229 +0000 UTC m=+2094.711531780" watchObservedRunningTime="2026-01-30 09:04:49.746275918 +0000 UTC m=+2094.718587469" Jan 30 09:04:55 crc kubenswrapper[4758]: I0130 09:04:55.812065 4758 generic.go:334] "Generic (PLEG): container finished" podID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerID="18998ab7144638e9b2dfc92ec7bcdccca85b06ffdbc64b6bcf7bb049fb552fc0" exitCode=0 Jan 30 09:04:55 crc kubenswrapper[4758]: I0130 09:04:55.812690 4758 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerDied","Data":"18998ab7144638e9b2dfc92ec7bcdccca85b06ffdbc64b6bcf7bb049fb552fc0"} Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.323759 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.495735 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.496188 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.496404 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.522180 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg" (OuterVolumeSpecName: "kube-api-access-ndddg") pod "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" (UID: "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83"). InnerVolumeSpecName "kube-api-access-ndddg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.527618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" (UID: "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.529604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory" (OuterVolumeSpecName: "inventory") pod "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" (UID: "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.598746 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.599015 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.599160 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.828557 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" 
event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerDied","Data":"d352897def0b617952e7c67684ba624775564f16c5a6496bda83f806b89314ef"} Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.828780 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d352897def0b617952e7c67684ba624775564f16c5a6496bda83f806b89314ef" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.828622 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.932245 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8"] Jan 30 09:04:57 crc kubenswrapper[4758]: E0130 09:04:57.932628 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.932646 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.932843 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.933565 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.937767 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.938186 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.938350 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.938514 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.965953 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8"] Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.005682 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.005838 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.005883 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.107532 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.107601 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.107681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.112906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.113706 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.128681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.253711 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.886805 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8"] Jan 30 09:04:59 crc kubenswrapper[4758]: I0130 09:04:59.843807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerStarted","Data":"b23bf407896421a035b030f60a01b93a3691c0ab6b6ddf66ca1f15e3a0584fb0"} Jan 30 09:04:59 crc kubenswrapper[4758]: I0130 09:04:59.844167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerStarted","Data":"ef6efd355e0b4c3febb5707c62cbcfd772fd1e6a6f9bafd7965c6c7bab5ba343"} Jan 30 09:04:59 crc kubenswrapper[4758]: I0130 09:04:59.872736 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" podStartSLOduration=2.455770913 podStartE2EDuration="2.87271257s" podCreationTimestamp="2026-01-30 09:04:57 +0000 UTC" firstStartedPulling="2026-01-30 09:04:58.894709327 +0000 UTC m=+2103.867020878" lastFinishedPulling="2026-01-30 09:04:59.311650984 +0000 UTC m=+2104.283962535" observedRunningTime="2026-01-30 09:04:59.86077432 +0000 UTC m=+2104.833085871" watchObservedRunningTime="2026-01-30 09:04:59.87271257 +0000 UTC m=+2104.845024121" Jan 30 09:05:22 crc kubenswrapper[4758]: I0130 09:05:22.387908 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:05:22 crc 
kubenswrapper[4758]: I0130 09:05:22.389054 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:05:46 crc kubenswrapper[4758]: I0130 09:05:46.554978 4758 generic.go:334] "Generic (PLEG): container finished" podID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerID="b23bf407896421a035b030f60a01b93a3691c0ab6b6ddf66ca1f15e3a0584fb0" exitCode=0 Jan 30 09:05:46 crc kubenswrapper[4758]: I0130 09:05:46.555521 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerDied","Data":"b23bf407896421a035b030f60a01b93a3691c0ab6b6ddf66ca1f15e3a0584fb0"} Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.002598 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.130289 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.130363 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.130555 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.135951 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb" (OuterVolumeSpecName: "kube-api-access-9rgvb") pod "39adf6a6-10cd-412d-aca4-3c68ddcf8887" (UID: "39adf6a6-10cd-412d-aca4-3c68ddcf8887"). InnerVolumeSpecName "kube-api-access-9rgvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.160308 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39adf6a6-10cd-412d-aca4-3c68ddcf8887" (UID: "39adf6a6-10cd-412d-aca4-3c68ddcf8887"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.169116 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory" (OuterVolumeSpecName: "inventory") pod "39adf6a6-10cd-412d-aca4-3c68ddcf8887" (UID: "39adf6a6-10cd-412d-aca4-3c68ddcf8887"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.233936 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.236770 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.236794 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") on node \"crc\" DevicePath \"\"" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.573174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" 
event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerDied","Data":"ef6efd355e0b4c3febb5707c62cbcfd772fd1e6a6f9bafd7965c6c7bab5ba343"} Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.573240 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6efd355e0b4c3febb5707c62cbcfd772fd1e6a6f9bafd7965c6c7bab5ba343" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.573245 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.683791 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x"] Jan 30 09:05:48 crc kubenswrapper[4758]: E0130 09:05:48.684288 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.684314 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.684572 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.685389 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.688193 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.688243 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.688683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.691224 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.693818 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x"] Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.848289 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.848573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.848626 
4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.951176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.951865 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.952535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.959212 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.961516 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.972178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:49 crc kubenswrapper[4758]: I0130 09:05:49.034132 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:49 crc kubenswrapper[4758]: I0130 09:05:49.611618 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x"] Jan 30 09:05:50 crc kubenswrapper[4758]: I0130 09:05:50.589033 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerStarted","Data":"5793e1001e78c7d540918afb3a8f95ec2302b1ab81096461a7cfe99d2adc1bb6"} Jan 30 09:05:50 crc kubenswrapper[4758]: I0130 09:05:50.589420 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerStarted","Data":"ae501280db69d617245184bac11d4250b957b0ded8237a8688a17ead9f427506"} Jan 30 09:05:50 crc kubenswrapper[4758]: I0130 09:05:50.605520 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" podStartSLOduration=2.041347269 podStartE2EDuration="2.605504078s" podCreationTimestamp="2026-01-30 09:05:48 +0000 UTC" firstStartedPulling="2026-01-30 09:05:49.626200924 +0000 UTC m=+2154.598512475" lastFinishedPulling="2026-01-30 09:05:50.190357733 +0000 UTC m=+2155.162669284" observedRunningTime="2026-01-30 09:05:50.603608609 +0000 UTC m=+2155.575920160" watchObservedRunningTime="2026-01-30 09:05:50.605504078 +0000 UTC m=+2155.577815629" Jan 30 09:05:52 crc kubenswrapper[4758]: I0130 09:05:52.387805 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:05:52 crc 
kubenswrapper[4758]: I0130 09:05:52.388162 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.387320 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.387895 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.387936 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.388669 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.388719 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" gracePeriod=600 Jan 30 09:06:22 crc kubenswrapper[4758]: E0130 09:06:22.533286 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.847822 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" exitCode=0 Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.847887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"} Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.848089 4758 scope.go:117] "RemoveContainer" containerID="a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.848779 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:06:22 crc kubenswrapper[4758]: E0130 09:06:22.849075 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:33 crc kubenswrapper[4758]: I0130 09:06:33.769265 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:06:33 crc kubenswrapper[4758]: E0130 09:06:33.773598 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:38 crc kubenswrapper[4758]: I0130 09:06:38.977892 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerID="5793e1001e78c7d540918afb3a8f95ec2302b1ab81096461a7cfe99d2adc1bb6" exitCode=0 Jan 30 09:06:38 crc kubenswrapper[4758]: I0130 09:06:38.978475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerDied","Data":"5793e1001e78c7d540918afb3a8f95ec2302b1ab81096461a7cfe99d2adc1bb6"} Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.407621 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.590796 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.590917 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.590992 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.597235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv" (OuterVolumeSpecName: "kube-api-access-zz7sv") pod "b5727310-9e0b-40f5-ae4e-209ed7d3ee36" (UID: "b5727310-9e0b-40f5-ae4e-209ed7d3ee36"). InnerVolumeSpecName "kube-api-access-zz7sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.621080 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory" (OuterVolumeSpecName: "inventory") pod "b5727310-9e0b-40f5-ae4e-209ed7d3ee36" (UID: "b5727310-9e0b-40f5-ae4e-209ed7d3ee36"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.638941 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5727310-9e0b-40f5-ae4e-209ed7d3ee36" (UID: "b5727310-9e0b-40f5-ae4e-209ed7d3ee36"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.693731 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.693764 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.693776 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.994113 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerDied","Data":"ae501280db69d617245184bac11d4250b957b0ded8237a8688a17ead9f427506"} Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.994156 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae501280db69d617245184bac11d4250b957b0ded8237a8688a17ead9f427506" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 
09:06:40.994201 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.237349 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wlb6p"] Jan 30 09:06:41 crc kubenswrapper[4758]: E0130 09:06:41.238112 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.238154 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.238463 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.241373 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.250124 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.250862 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.253877 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.254549 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.255419 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.255539 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.255682 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" 
(UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.258024 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wlb6p"] Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.359186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.359323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.359429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.363374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 
09:06:41.365072 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.380030 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.567423 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:42 crc kubenswrapper[4758]: I0130 09:06:42.067148 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wlb6p"] Jan 30 09:06:43 crc kubenswrapper[4758]: I0130 09:06:43.010376 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerStarted","Data":"f745e166067f02a1df0dfdb9c6ca2f5f3ef50a342ff6043a33e62e116963b93f"} Jan 30 09:06:43 crc kubenswrapper[4758]: I0130 09:06:43.011711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerStarted","Data":"b68ff98e80b33635b3d571d2ad2075a64410d86b11bdbe49b49e782727e62910"} Jan 30 09:06:43 crc kubenswrapper[4758]: I0130 09:06:43.031089 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" podStartSLOduration=1.5333261679999999 
podStartE2EDuration="2.031017649s" podCreationTimestamp="2026-01-30 09:06:41 +0000 UTC" firstStartedPulling="2026-01-30 09:06:42.076931195 +0000 UTC m=+2207.049242756" lastFinishedPulling="2026-01-30 09:06:42.574622686 +0000 UTC m=+2207.546934237" observedRunningTime="2026-01-30 09:06:43.026610563 +0000 UTC m=+2207.998922124" watchObservedRunningTime="2026-01-30 09:06:43.031017649 +0000 UTC m=+2208.003329200" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.373781 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.375957 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.428408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.518968 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.519027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.519089 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmz2\" (UniqueName: 
\"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.620825 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.620889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.620939 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.621442 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.621543 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.647437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.701032 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:45 crc kubenswrapper[4758]: I0130 09:06:45.231292 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.034224 4758 generic.go:334] "Generic (PLEG): container finished" podID="805974e6-14ab-447b-a106-e126ae4d93fa" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd" exitCode=0 Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.034292 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"} Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.035676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerStarted","Data":"08100a81bfc18cc43a5cb8fab19ef90b5843598a774b1d588c15073d0cabb9f4"} Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.769123 4758 scope.go:117] "RemoveContainer" 
containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:06:46 crc kubenswrapper[4758]: E0130 09:06:46.769876 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:47 crc kubenswrapper[4758]: I0130 09:06:47.044419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerStarted","Data":"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"} Jan 30 09:06:48 crc kubenswrapper[4758]: I0130 09:06:48.055189 4758 generic.go:334] "Generic (PLEG): container finished" podID="805974e6-14ab-447b-a106-e126ae4d93fa" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce" exitCode=0 Jan 30 09:06:48 crc kubenswrapper[4758]: I0130 09:06:48.055261 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"} Jan 30 09:06:49 crc kubenswrapper[4758]: I0130 09:06:49.066657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerStarted","Data":"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"} Jan 30 09:06:50 crc kubenswrapper[4758]: I0130 09:06:50.082748 4758 generic.go:334] "Generic (PLEG): container finished" podID="1135df14-5d2f-47ff-9038-fce2addc71d0" 
containerID="f745e166067f02a1df0dfdb9c6ca2f5f3ef50a342ff6043a33e62e116963b93f" exitCode=0 Jan 30 09:06:50 crc kubenswrapper[4758]: I0130 09:06:50.083161 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerDied","Data":"f745e166067f02a1df0dfdb9c6ca2f5f3ef50a342ff6043a33e62e116963b93f"} Jan 30 09:06:50 crc kubenswrapper[4758]: I0130 09:06:50.114178 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lqb9z" podStartSLOduration=3.692465323 podStartE2EDuration="6.114150151s" podCreationTimestamp="2026-01-30 09:06:44 +0000 UTC" firstStartedPulling="2026-01-30 09:06:46.036299459 +0000 UTC m=+2211.008611010" lastFinishedPulling="2026-01-30 09:06:48.457984287 +0000 UTC m=+2213.430295838" observedRunningTime="2026-01-30 09:06:49.090158293 +0000 UTC m=+2214.062469854" watchObservedRunningTime="2026-01-30 09:06:50.114150151 +0000 UTC m=+2215.086461722" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.523319 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p"
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.661830 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"1135df14-5d2f-47ff-9038-fce2addc71d0\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") "
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.661926 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"1135df14-5d2f-47ff-9038-fce2addc71d0\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") "
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.662076 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"1135df14-5d2f-47ff-9038-fce2addc71d0\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") "
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.669951 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf" (OuterVolumeSpecName: "kube-api-access-4jslf") pod "1135df14-5d2f-47ff-9038-fce2addc71d0" (UID: "1135df14-5d2f-47ff-9038-fce2addc71d0"). InnerVolumeSpecName "kube-api-access-4jslf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.690741 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1135df14-5d2f-47ff-9038-fce2addc71d0" (UID: "1135df14-5d2f-47ff-9038-fce2addc71d0"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.692657 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1135df14-5d2f-47ff-9038-fce2addc71d0" (UID: "1135df14-5d2f-47ff-9038-fce2addc71d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.763889 4758 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.763932 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") on node \"crc\" DevicePath \"\""
Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.763948 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.131980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerDied","Data":"b68ff98e80b33635b3d571d2ad2075a64410d86b11bdbe49b49e782727e62910"}
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.132022 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68ff98e80b33635b3d571d2ad2075a64410d86b11bdbe49b49e782727e62910"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.132096 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.256966 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"]
Jan 30 09:06:52 crc kubenswrapper[4758]: E0130 09:06:52.257695 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.257715 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.257968 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.258734 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.261482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.261624 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.261654 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.263896 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.322163 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"]
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.408588 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.408661 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.408701 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.510878 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.510967 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.511017 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.517886 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.534079 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.537601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.617411 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:06:53 crc kubenswrapper[4758]: I0130 09:06:53.156337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"]
Jan 30 09:06:53 crc kubenswrapper[4758]: W0130 09:06:53.162307 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0522c4_4143_4ac3_b5c7_ea6c073dfc38.slice/crio-a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43 WatchSource:0}: Error finding container a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43: Status 404 returned error can't find the container with id a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43
Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.148504 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerStarted","Data":"adb32b8102ded5b49f066a27ed9dfacb49df27b76fdbf55f0df66a6e3c6e4218"}
Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.148875 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerStarted","Data":"a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43"}
Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.169479 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" podStartSLOduration=1.7024343929999999 podStartE2EDuration="2.169457404s" podCreationTimestamp="2026-01-30 09:06:52 +0000 UTC" firstStartedPulling="2026-01-30 09:06:53.166459297 +0000 UTC m=+2218.138770848" lastFinishedPulling="2026-01-30 09:06:53.633482288 +0000 UTC m=+2218.605793859" observedRunningTime="2026-01-30 09:06:54.163359246 +0000 UTC m=+2219.135670797" watchObservedRunningTime="2026-01-30 09:06:54.169457404 +0000 UTC m=+2219.141768965"
Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.701680 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lqb9z"
Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.702022 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lqb9z"
Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.748109 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lqb9z"
Jan 30 09:06:55 crc kubenswrapper[4758]: I0130 09:06:55.218430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lqb9z"
Jan 30 09:06:55 crc kubenswrapper[4758]: I0130 09:06:55.266881 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"]
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.173421 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lqb9z" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server" containerID="cri-o://4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" gracePeriod=2
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.742678 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z"
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.827120 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"805974e6-14ab-447b-a106-e126ae4d93fa\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") "
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.827327 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"805974e6-14ab-447b-a106-e126ae4d93fa\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") "
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.827359 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"805974e6-14ab-447b-a106-e126ae4d93fa\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") "
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.828211 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities" (OuterVolumeSpecName: "utilities") pod "805974e6-14ab-447b-a106-e126ae4d93fa" (UID: "805974e6-14ab-447b-a106-e126ae4d93fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.834396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2" (OuterVolumeSpecName: "kube-api-access-fzmz2") pod "805974e6-14ab-447b-a106-e126ae4d93fa" (UID: "805974e6-14ab-447b-a106-e126ae4d93fa"). InnerVolumeSpecName "kube-api-access-fzmz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.889178 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "805974e6-14ab-447b-a106-e126ae4d93fa" (UID: "805974e6-14ab-447b-a106-e126ae4d93fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.931019 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.931147 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") on node \"crc\" DevicePath \"\""
Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.931163 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184089 4758 generic.go:334] "Generic (PLEG): container finished" podID="805974e6-14ab-447b-a106-e126ae4d93fa" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" exitCode=0
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"}
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184164 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"08100a81bfc18cc43a5cb8fab19ef90b5843598a774b1d588c15073d0cabb9f4"}
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184184 4758 scope.go:117] "RemoveContainer" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184324 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.228287 4758 scope.go:117] "RemoveContainer" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.237480 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"]
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.246594 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"]
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.255256 4758 scope.go:117] "RemoveContainer" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.302318 4758 scope.go:117] "RemoveContainer" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"
Jan 30 09:06:58 crc kubenswrapper[4758]: E0130 09:06:58.303296 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3\": container with ID starting with 4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3 not found: ID does not exist" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.303367 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"} err="failed to get container status \"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3\": rpc error: code = NotFound desc = could not find container \"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3\": container with ID starting with 4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3 not found: ID does not exist"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.303401 4758 scope.go:117] "RemoveContainer" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"
Jan 30 09:06:58 crc kubenswrapper[4758]: E0130 09:06:58.304398 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce\": container with ID starting with 3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce not found: ID does not exist" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.304432 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"} err="failed to get container status \"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce\": rpc error: code = NotFound desc = could not find container \"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce\": container with ID starting with 3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce not found: ID does not exist"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.304452 4758 scope.go:117] "RemoveContainer" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"
Jan 30 09:06:58 crc kubenswrapper[4758]: E0130 09:06:58.304872 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd\": container with ID starting with 8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd not found: ID does not exist" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"
Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.304908 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"} err="failed to get container status \"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd\": rpc error: code = NotFound desc = could not find container \"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd\": container with ID starting with 8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd not found: ID does not exist"
Jan 30 09:06:59 crc kubenswrapper[4758]: I0130 09:06:59.779591 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" path="/var/lib/kubelet/pods/805974e6-14ab-447b-a106-e126ae4d93fa/volumes"
Jan 30 09:07:01 crc kubenswrapper[4758]: I0130 09:07:01.775715 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"
Jan 30 09:07:01 crc kubenswrapper[4758]: E0130 09:07:01.776564 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:07:02 crc kubenswrapper[4758]: I0130 09:07:02.220084 4758 generic.go:334] "Generic (PLEG): container finished" podID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerID="adb32b8102ded5b49f066a27ed9dfacb49df27b76fdbf55f0df66a6e3c6e4218" exitCode=0
Jan 30 09:07:02 crc kubenswrapper[4758]: I0130 09:07:02.220145 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerDied","Data":"adb32b8102ded5b49f066a27ed9dfacb49df27b76fdbf55f0df66a6e3c6e4218"}
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.614852 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.738394 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") "
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.738520 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") "
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.739392 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") "
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.743670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4" (OuterVolumeSpecName: "kube-api-access-qddm4") pod "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" (UID: "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38"). InnerVolumeSpecName "kube-api-access-qddm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.764644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" (UID: "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.773200 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory" (OuterVolumeSpecName: "inventory") pod "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" (UID: "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.841554 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.841587 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") on node \"crc\" DevicePath \"\""
Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.841604 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.237807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerDied","Data":"a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43"}
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.238104 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.237870 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.318750 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"]
Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319236 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319251 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server"
Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319277 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-utilities"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319285 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-utilities"
Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319304 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319313 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319331 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-content"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319340 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-content"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319671 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319684 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.320436 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.322780 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.324868 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.326363 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.330447 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.333177 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"]
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.351102 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.351160 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.351302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.453355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.453593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.453744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.458906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.465680 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.473086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.646711 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:05 crc kubenswrapper[4758]: I0130 09:07:05.154692 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"]
Jan 30 09:07:05 crc kubenswrapper[4758]: I0130 09:07:05.251377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerStarted","Data":"13618976f3c2154c8026ea564ec1c91778bede8278c88a0f4bb4dc6df457d816"}
Jan 30 09:07:06 crc kubenswrapper[4758]: I0130 09:07:06.260126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerStarted","Data":"4e7755a7692a4a0ab57a1634916ed6268ef5b07f500e55f0e1b24eb2510f3a70"}
Jan 30 09:07:06 crc kubenswrapper[4758]: I0130 09:07:06.276032 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" podStartSLOduration=1.842255617 podStartE2EDuration="2.276007988s" podCreationTimestamp="2026-01-30 09:07:04 +0000 UTC" firstStartedPulling="2026-01-30 09:07:05.153959014 +0000 UTC m=+2230.126270565" lastFinishedPulling="2026-01-30 09:07:05.587711385 +0000 UTC m=+2230.560022936" observedRunningTime="2026-01-30 09:07:06.274829731 +0000 UTC m=+2231.247141292" watchObservedRunningTime="2026-01-30 09:07:06.276007988 +0000 UTC m=+2231.248319549"
Jan 30 09:07:14 crc kubenswrapper[4758]: I0130 09:07:14.768134 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"
Jan 30 09:07:14 crc kubenswrapper[4758]: E0130 09:07:14.769813 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:07:15 crc kubenswrapper[4758]: I0130 09:07:15.346008 4758 generic.go:334] "Generic (PLEG): container finished" podID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerID="4e7755a7692a4a0ab57a1634916ed6268ef5b07f500e55f0e1b24eb2510f3a70" exitCode=0
Jan 30 09:07:15 crc kubenswrapper[4758]: I0130 09:07:15.346108 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerDied","Data":"4e7755a7692a4a0ab57a1634916ed6268ef5b07f500e55f0e1b24eb2510f3a70"}
Jan 30 09:07:16 crc kubenswrapper[4758]: I0130 09:07:16.982696 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"
Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.101103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") "
Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.101219 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") "
Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.101325 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") "
Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.107269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff" (OuterVolumeSpecName: "kube-api-access-p4rff") pod "49ef08a8-e5d8-4a62-bc4d-227948c7fa12" (UID: "49ef08a8-e5d8-4a62-bc4d-227948c7fa12"). InnerVolumeSpecName "kube-api-access-p4rff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.130689 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49ef08a8-e5d8-4a62-bc4d-227948c7fa12" (UID: "49ef08a8-e5d8-4a62-bc4d-227948c7fa12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.137848 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory" (OuterVolumeSpecName: "inventory") pod "49ef08a8-e5d8-4a62-bc4d-227948c7fa12" (UID: "49ef08a8-e5d8-4a62-bc4d-227948c7fa12"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.204191 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.204237 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.204247 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.365215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerDied","Data":"13618976f3c2154c8026ea564ec1c91778bede8278c88a0f4bb4dc6df457d816"} Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.365760 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13618976f3c2154c8026ea564ec1c91778bede8278c88a0f4bb4dc6df457d816" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.365303 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.479941 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq"] Jan 30 09:07:17 crc kubenswrapper[4758]: E0130 09:07:17.480442 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.480466 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.480672 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.481325 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.483571 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.484758 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.484939 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.485166 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.485932 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.485984 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.486133 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.486295 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.513464 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq"] Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614807 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614894 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614946 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614971 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615004 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615094 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615134 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615157 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615206 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615251 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717673 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717694 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717745 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718580 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718650 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718709 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.723138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc 
kubenswrapper[4758]: I0130 09:07:17.723336 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.725850 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.726420 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.726606 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.726974 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.727952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728158 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728503 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728922 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.730393 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.732138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.736669 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: 
\"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.798309 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:18 crc kubenswrapper[4758]: I0130 09:07:18.314917 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq"] Jan 30 09:07:18 crc kubenswrapper[4758]: I0130 09:07:18.379537 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerStarted","Data":"da639f6e610dd22a1e9e3a2924b8ba74eae6e646235058c322474a41909bb605"} Jan 30 09:07:19 crc kubenswrapper[4758]: I0130 09:07:19.388776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerStarted","Data":"1a4a56b9bf7992228fe58f7b359b41903dfb2553058e5d89e4cee6d5834254c2"} Jan 30 09:07:19 crc kubenswrapper[4758]: I0130 09:07:19.408549 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" podStartSLOduration=2.013281431 podStartE2EDuration="2.40853058s" podCreationTimestamp="2026-01-30 09:07:17 +0000 UTC" firstStartedPulling="2026-01-30 09:07:18.319018323 +0000 UTC m=+2243.291329874" lastFinishedPulling="2026-01-30 09:07:18.714267482 +0000 UTC m=+2243.686579023" observedRunningTime="2026-01-30 09:07:19.405672122 +0000 UTC m=+2244.377983673" watchObservedRunningTime="2026-01-30 09:07:19.40853058 +0000 UTC m=+2244.380842131" Jan 30 09:07:29 crc kubenswrapper[4758]: I0130 09:07:29.769432 4758 scope.go:117] "RemoveContainer" 
containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:29 crc kubenswrapper[4758]: E0130 09:07:29.770304 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:40 crc kubenswrapper[4758]: I0130 09:07:40.769671 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:40 crc kubenswrapper[4758]: E0130 09:07:40.770682 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:55 crc kubenswrapper[4758]: I0130 09:07:55.777943 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:55 crc kubenswrapper[4758]: E0130 09:07:55.778737 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:56 crc kubenswrapper[4758]: I0130 09:07:56.715319 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="836593cc-2b98-4f54-8407-6d92687559f5" containerID="1a4a56b9bf7992228fe58f7b359b41903dfb2553058e5d89e4cee6d5834254c2" exitCode=0 Jan 30 09:07:56 crc kubenswrapper[4758]: I0130 09:07:56.715368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerDied","Data":"1a4a56b9bf7992228fe58f7b359b41903dfb2553058e5d89e4cee6d5834254c2"} Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.216502 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379373 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379450 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379522 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379569 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379633 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379730 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.380553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.380958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.381011 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.381365 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.381450 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.386614 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.388013 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.388865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65" (OuterVolumeSpecName: "kube-api-access-8cw65") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "kube-api-access-8cw65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.391804 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.391837 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.392189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.392885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.392787 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.393929 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.404249 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.404257 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.412746 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.432764 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory" (OuterVolumeSpecName: "inventory") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.437314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484489 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484525 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484538 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484548 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484560 4758 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484568 4758 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484576 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484585 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484595 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484606 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484615 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484628 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484638 4758 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 
09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484646 4758 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.734602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerDied","Data":"da639f6e610dd22a1e9e3a2924b8ba74eae6e646235058c322474a41909bb605"} Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.735030 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da639f6e610dd22a1e9e3a2924b8ba74eae6e646235058c322474a41909bb605" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.734970 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.880419 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l"] Jan 30 09:07:58 crc kubenswrapper[4758]: E0130 09:07:58.880811 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836593cc-2b98-4f54-8407-6d92687559f5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.880830 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="836593cc-2b98-4f54-8407-6d92687559f5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.881062 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="836593cc-2b98-4f54-8407-6d92687559f5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 
09:07:58.881746 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888352 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888504 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888595 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888631 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.892756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.921541 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l"] Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993663 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993747 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993824 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.095844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 
09:07:59.095893 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.095918 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.096003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.096122 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.098072 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.100396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.109068 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.109451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.113688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.199845 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.735604 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l"] Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.748435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerStarted","Data":"43854d2e550b3a157e48ef29ea2036779fa7f76a1d19050140b07be00b088424"} Jan 30 09:08:00 crc kubenswrapper[4758]: I0130 09:08:00.757010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerStarted","Data":"37ef71ddd39ff51e9fb156dd6e3018baa774422d52cbb49086cb98d66a9bd3e4"} Jan 30 09:08:00 crc kubenswrapper[4758]: I0130 09:08:00.778790 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" podStartSLOduration=2.344092603 podStartE2EDuration="2.778765663s" podCreationTimestamp="2026-01-30 09:07:58 +0000 UTC" firstStartedPulling="2026-01-30 09:07:59.735953978 +0000 UTC m=+2284.708265529" lastFinishedPulling="2026-01-30 09:08:00.170627038 +0000 UTC m=+2285.142938589" observedRunningTime="2026-01-30 09:08:00.772170048 +0000 UTC m=+2285.744481609" watchObservedRunningTime="2026-01-30 09:08:00.778765663 +0000 UTC m=+2285.751077214" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.346230 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.348503 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.363527 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.447401 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.447486 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.447571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.549483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.549594 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.549716 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.550593 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.550873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.601700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.667567 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.190199 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.773590 4758 generic.go:334] "Generic (PLEG): container finished" podID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" exitCode=0 Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.773814 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa"} Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.773835 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerStarted","Data":"079f8dfa5914c4bc538986134200346f8bd8704f1601fa58efa1d5cb8fa02289"} Jan 30 09:08:03 crc kubenswrapper[4758]: I0130 09:08:03.786423 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerStarted","Data":"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8"} Jan 30 09:08:04 crc kubenswrapper[4758]: I0130 09:08:04.797249 4758 generic.go:334] "Generic (PLEG): container finished" podID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" exitCode=0 Jan 30 09:08:04 crc kubenswrapper[4758]: I0130 09:08:04.797306 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" 
event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8"} Jan 30 09:08:05 crc kubenswrapper[4758]: I0130 09:08:05.808891 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerStarted","Data":"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d"} Jan 30 09:08:05 crc kubenswrapper[4758]: I0130 09:08:05.829636 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bq4b9" podStartSLOduration=2.095345657 podStartE2EDuration="4.82961035s" podCreationTimestamp="2026-01-30 09:08:01 +0000 UTC" firstStartedPulling="2026-01-30 09:08:02.774998322 +0000 UTC m=+2287.747309873" lastFinishedPulling="2026-01-30 09:08:05.509263015 +0000 UTC m=+2290.481574566" observedRunningTime="2026-01-30 09:08:05.829331751 +0000 UTC m=+2290.801643312" watchObservedRunningTime="2026-01-30 09:08:05.82961035 +0000 UTC m=+2290.801921911" Jan 30 09:08:07 crc kubenswrapper[4758]: I0130 09:08:07.768793 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:07 crc kubenswrapper[4758]: E0130 09:08:07.769564 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.668868 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc 
kubenswrapper[4758]: I0130 09:08:11.669511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.719795 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.897102 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.962803 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:13 crc kubenswrapper[4758]: I0130 09:08:13.869741 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bq4b9" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" containerID="cri-o://8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" gracePeriod=2 Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.294372 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.395606 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.395755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.395901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.397028 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities" (OuterVolumeSpecName: "utilities") pod "66af2af4-4490-4d94-b90f-c6c7c64c06b0" (UID: "66af2af4-4490-4d94-b90f-c6c7c64c06b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.404277 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp" (OuterVolumeSpecName: "kube-api-access-dfvsp") pod "66af2af4-4490-4d94-b90f-c6c7c64c06b0" (UID: "66af2af4-4490-4d94-b90f-c6c7c64c06b0"). InnerVolumeSpecName "kube-api-access-dfvsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.417477 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66af2af4-4490-4d94-b90f-c6c7c64c06b0" (UID: "66af2af4-4490-4d94-b90f-c6c7c64c06b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.498317 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") on node \"crc\" DevicePath \"\"" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.498372 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.498384 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.880964 4758 generic.go:334] "Generic (PLEG): container finished" podID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" exitCode=0 Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d"} Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881111 4758 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"079f8dfa5914c4bc538986134200346f8bd8704f1601fa58efa1d5cb8fa02289"} Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881153 4758 scope.go:117] "RemoveContainer" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.906153 4758 scope.go:117] "RemoveContainer" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.933352 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.953728 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.991111 4758 scope.go:117] "RemoveContainer" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.040120 4758 scope.go:117] "RemoveContainer" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" Jan 30 09:08:15 crc kubenswrapper[4758]: E0130 09:08:15.041098 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d\": container with ID starting with 8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d not found: ID does not exist" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041152 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d"} err="failed to get container status \"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d\": rpc error: code = NotFound desc = could not find container \"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d\": container with ID starting with 8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d not found: ID does not exist" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041182 4758 scope.go:117] "RemoveContainer" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" Jan 30 09:08:15 crc kubenswrapper[4758]: E0130 09:08:15.041571 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8\": container with ID starting with f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8 not found: ID does not exist" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041609 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8"} err="failed to get container status \"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8\": rpc error: code = NotFound desc = could not find container \"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8\": container with ID starting with f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8 not found: ID does not exist" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041696 4758 scope.go:117] "RemoveContainer" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" Jan 30 09:08:15 crc kubenswrapper[4758]: E0130 
09:08:15.045448 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa\": container with ID starting with f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa not found: ID does not exist" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.045493 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa"} err="failed to get container status \"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa\": rpc error: code = NotFound desc = could not find container \"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa\": container with ID starting with f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa not found: ID does not exist" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.780811 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" path="/var/lib/kubelet/pods/66af2af4-4490-4d94-b90f-c6c7c64c06b0/volumes" Jan 30 09:08:20 crc kubenswrapper[4758]: I0130 09:08:20.768834 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:20 crc kubenswrapper[4758]: E0130 09:08:20.769944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:34 crc kubenswrapper[4758]: I0130 09:08:34.769333 
4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:34 crc kubenswrapper[4758]: E0130 09:08:34.771370 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:46 crc kubenswrapper[4758]: I0130 09:08:46.769012 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:46 crc kubenswrapper[4758]: E0130 09:08:46.769837 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.920016 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:08:55 crc kubenswrapper[4758]: E0130 09:08:55.921158 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-content" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921178 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-content" Jan 30 09:08:55 crc kubenswrapper[4758]: E0130 09:08:55.921220 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921229 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" Jan 30 09:08:55 crc kubenswrapper[4758]: E0130 09:08:55.921256 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-utilities" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921265 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-utilities" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921502 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.923384 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.952472 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.052865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.052963 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"community-operators-g5jnd\" (UID: 
\"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.053008 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154798 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"community-operators-g5jnd\" (UID: 
\"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154891 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.183122 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.249729 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.973230 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:08:57 crc kubenswrapper[4758]: I0130 09:08:57.235341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerStarted","Data":"59d86bb28b62612bf6fc017a25813c662c7653a23aa5fbf6ccfda81731cc8cb8"} Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.244848 4758 generic.go:334] "Generic (PLEG): container finished" podID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" exitCode=0 Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.244934 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18"} Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.246994 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.768956 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:58 crc kubenswrapper[4758]: E0130 09:08:58.769238 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 
09:08:59 crc kubenswrapper[4758]: I0130 09:08:59.257744 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerStarted","Data":"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c"} Jan 30 09:09:01 crc kubenswrapper[4758]: I0130 09:09:01.275622 4758 generic.go:334] "Generic (PLEG): container finished" podID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" exitCode=0 Jan 30 09:09:01 crc kubenswrapper[4758]: I0130 09:09:01.275774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c"} Jan 30 09:09:02 crc kubenswrapper[4758]: I0130 09:09:02.288640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerStarted","Data":"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d"} Jan 30 09:09:05 crc kubenswrapper[4758]: I0130 09:09:05.319447 4758 generic.go:334] "Generic (PLEG): container finished" podID="9db7f310-f803-4981-8a55-5d45e9015488" containerID="37ef71ddd39ff51e9fb156dd6e3018baa774422d52cbb49086cb98d66a9bd3e4" exitCode=0 Jan 30 09:09:05 crc kubenswrapper[4758]: I0130 09:09:05.319530 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerDied","Data":"37ef71ddd39ff51e9fb156dd6e3018baa774422d52cbb49086cb98d66a9bd3e4"} Jan 30 09:09:05 crc kubenswrapper[4758]: I0130 09:09:05.352284 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5jnd" 
podStartSLOduration=6.903151804 podStartE2EDuration="10.352259311s" podCreationTimestamp="2026-01-30 09:08:55 +0000 UTC" firstStartedPulling="2026-01-30 09:08:58.246743858 +0000 UTC m=+2343.219055409" lastFinishedPulling="2026-01-30 09:09:01.695851365 +0000 UTC m=+2346.668162916" observedRunningTime="2026-01-30 09:09:02.324979764 +0000 UTC m=+2347.297291325" watchObservedRunningTime="2026-01-30 09:09:05.352259311 +0000 UTC m=+2350.324570862" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.249863 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.250274 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.303512 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.376990 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.583353 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.777558 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.953983 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954058 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954278 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.969308 4758 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg" (OuterVolumeSpecName: "kube-api-access-flnhg") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "kube-api-access-flnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.969444 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.986921 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.990116 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.994242 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory" (OuterVolumeSpecName: "inventory") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056386 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056431 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056444 4758 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056456 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056469 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.352754 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.355383 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerDied","Data":"43854d2e550b3a157e48ef29ea2036779fa7f76a1d19050140b07be00b088424"} Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.355468 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43854d2e550b3a157e48ef29ea2036779fa7f76a1d19050140b07be00b088424" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.452109 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr"] Jan 30 09:09:07 crc kubenswrapper[4758]: E0130 09:09:07.452622 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db7f310-f803-4981-8a55-5d45e9015488" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.452637 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db7f310-f803-4981-8a55-5d45e9015488" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.452893 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db7f310-f803-4981-8a55-5d45e9015488" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.453745 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.466720 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.466719 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.466920 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.467000 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.467096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.467274 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.495009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr"] Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585395 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585533 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687664 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687816 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687879 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.697121 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.700997 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.701747 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.705585 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.706168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.707754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: 
\"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.785754 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.344637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr"] Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.358214 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5jnd" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" containerID="cri-o://6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" gracePeriod=2 Jan 30 09:09:08 crc kubenswrapper[4758]: W0130 09:09:08.359360 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode377cd96_b016_4154_bae7_fa61f9be7472.slice/crio-f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d WatchSource:0}: Error finding container f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d: Status 404 returned error can't find the container with id f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.766397 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.917077 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.917306 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.917394 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.919193 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities" (OuterVolumeSpecName: "utilities") pod "66507aad-a92a-4bc1-9ebf-0fc2de6b293c" (UID: "66507aad-a92a-4bc1-9ebf-0fc2de6b293c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.921350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg" (OuterVolumeSpecName: "kube-api-access-swsbg") pod "66507aad-a92a-4bc1-9ebf-0fc2de6b293c" (UID: "66507aad-a92a-4bc1-9ebf-0fc2de6b293c"). InnerVolumeSpecName "kube-api-access-swsbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.989264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66507aad-a92a-4bc1-9ebf-0fc2de6b293c" (UID: "66507aad-a92a-4bc1-9ebf-0fc2de6b293c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.019945 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.019985 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.019997 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.368915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerStarted","Data":"c57b9a11619b38b44cfb1cae3d8aab457c2a899787e7825908635f30c84cebc8"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.369279 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" 
event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerStarted","Data":"f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.371980 4758 generic.go:334] "Generic (PLEG): container finished" podID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" exitCode=0 Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372059 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372074 4758 scope.go:117] "RemoveContainer" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372061 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"59d86bb28b62612bf6fc017a25813c662c7653a23aa5fbf6ccfda81731cc8cb8"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.388274 4758 scope.go:117] "RemoveContainer" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.391789 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" podStartSLOduration=1.934603492 podStartE2EDuration="2.391762861s" podCreationTimestamp="2026-01-30 09:09:07 +0000 UTC" firstStartedPulling="2026-01-30 09:09:08.364569183 +0000 UTC 
m=+2353.336880734" lastFinishedPulling="2026-01-30 09:09:08.821728562 +0000 UTC m=+2353.794040103" observedRunningTime="2026-01-30 09:09:09.388553342 +0000 UTC m=+2354.360864903" watchObservedRunningTime="2026-01-30 09:09:09.391762861 +0000 UTC m=+2354.364074422" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.413889 4758 scope.go:117] "RemoveContainer" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.461357 4758 scope.go:117] "RemoveContainer" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" Jan 30 09:09:09 crc kubenswrapper[4758]: E0130 09:09:09.462072 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d\": container with ID starting with 6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d not found: ID does not exist" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462116 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d"} err="failed to get container status \"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d\": rpc error: code = NotFound desc = could not find container \"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d\": container with ID starting with 6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d not found: ID does not exist" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462145 4758 scope.go:117] "RemoveContainer" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" Jan 30 09:09:09 crc kubenswrapper[4758]: E0130 09:09:09.462452 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c\": container with ID starting with 85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c not found: ID does not exist" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462488 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c"} err="failed to get container status \"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c\": rpc error: code = NotFound desc = could not find container \"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c\": container with ID starting with 85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c not found: ID does not exist" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462506 4758 scope.go:117] "RemoveContainer" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" Jan 30 09:09:09 crc kubenswrapper[4758]: E0130 09:09:09.462858 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18\": container with ID starting with b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18 not found: ID does not exist" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462884 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18"} err="failed to get container status \"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18\": rpc error: code = NotFound desc = could not find container 
\"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18\": container with ID starting with b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18 not found: ID does not exist" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.469926 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.478706 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.777633 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" path="/var/lib/kubelet/pods/66507aad-a92a-4bc1-9ebf-0fc2de6b293c/volumes" Jan 30 09:09:10 crc kubenswrapper[4758]: I0130 09:09:10.768848 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:10 crc kubenswrapper[4758]: E0130 09:09:10.769211 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:23 crc kubenswrapper[4758]: I0130 09:09:23.769563 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:23 crc kubenswrapper[4758]: E0130 09:09:23.770422 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:35 crc kubenswrapper[4758]: I0130 09:09:35.775940 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:35 crc kubenswrapper[4758]: E0130 09:09:35.776858 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:48 crc kubenswrapper[4758]: I0130 09:09:48.769512 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:48 crc kubenswrapper[4758]: E0130 09:09:48.770394 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:59 crc kubenswrapper[4758]: I0130 09:09:59.768563 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:59 crc kubenswrapper[4758]: E0130 09:09:59.769293 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:59 crc kubenswrapper[4758]: I0130 09:09:59.797224 4758 generic.go:334] "Generic (PLEG): container finished" podID="e377cd96-b016-4154-bae7-fa61f9be7472" containerID="c57b9a11619b38b44cfb1cae3d8aab457c2a899787e7825908635f30c84cebc8" exitCode=0 Jan 30 09:09:59 crc kubenswrapper[4758]: I0130 09:09:59.797603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerDied","Data":"c57b9a11619b38b44cfb1cae3d8aab457c2a899787e7825908635f30c84cebc8"} Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.234908 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286594 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286653 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286678 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286747 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286817 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.294640 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd" (OuterVolumeSpecName: "kube-api-access-qdtkd") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "kube-api-access-qdtkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.307452 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.320482 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory" (OuterVolumeSpecName: "inventory") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.322370 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.328198 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.335873 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389213 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389246 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389258 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389270 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389279 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtkd\" (UniqueName: 
\"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389287 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.818806 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerDied","Data":"f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d"} Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.819415 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.819104 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.912827 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv"] Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913443 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-utilities" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913492 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-utilities" Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913525 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e377cd96-b016-4154-bae7-fa61f9be7472" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913537 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e377cd96-b016-4154-bae7-fa61f9be7472" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913568 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913576 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913594 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-content" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913603 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-content" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 
09:10:01.913827 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913860 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e377cd96-b016-4154-bae7-fa61f9be7472" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.914813 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.916987 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.917815 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.918128 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.918493 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.918814 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.933420 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv"] Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.003899 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: 
\"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004382 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc 
kubenswrapper[4758]: I0130 09:10:02.105972 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106105 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106157 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111752 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111775 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc 
kubenswrapper[4758]: I0130 09:10:02.133660 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.239398 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.602676 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv"] Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.828573 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerStarted","Data":"30133c0490ee20fac56a6741bd65b03b57f8a87d72d29e53c79df6b9f00067cc"} Jan 30 09:10:03 crc kubenswrapper[4758]: I0130 09:10:03.837778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerStarted","Data":"86ec62d0d5fcf1e8a74a7d83e79533755a8f09adadadc64e5d1d41a69ce36daf"} Jan 30 09:10:03 crc kubenswrapper[4758]: I0130 09:10:03.866752 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" podStartSLOduration=2.409331427 podStartE2EDuration="2.866734199s" podCreationTimestamp="2026-01-30 09:10:01 +0000 UTC" firstStartedPulling="2026-01-30 09:10:02.62925752 +0000 UTC m=+2407.601569071" lastFinishedPulling="2026-01-30 09:10:03.086660292 +0000 UTC m=+2408.058971843" observedRunningTime="2026-01-30 
09:10:03.860397951 +0000 UTC m=+2408.832709512" watchObservedRunningTime="2026-01-30 09:10:03.866734199 +0000 UTC m=+2408.839045750" Jan 30 09:10:13 crc kubenswrapper[4758]: I0130 09:10:13.769397 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:13 crc kubenswrapper[4758]: E0130 09:10:13.770545 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:10:25 crc kubenswrapper[4758]: I0130 09:10:25.768760 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:25 crc kubenswrapper[4758]: E0130 09:10:25.769695 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:10:40 crc kubenswrapper[4758]: I0130 09:10:40.768344 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:40 crc kubenswrapper[4758]: E0130 09:10:40.768954 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:10:55 crc kubenswrapper[4758]: I0130 09:10:55.775453 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:55 crc kubenswrapper[4758]: E0130 09:10:55.776224 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:11:07 crc kubenswrapper[4758]: I0130 09:11:07.769189 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:11:07 crc kubenswrapper[4758]: E0130 09:11:07.769930 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:11:22 crc kubenswrapper[4758]: I0130 09:11:22.769316 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:11:23 crc kubenswrapper[4758]: I0130 09:11:23.493804 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"} Jan 30 09:13:22 crc kubenswrapper[4758]: I0130 09:13:22.386933 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:13:22 crc kubenswrapper[4758]: I0130 09:13:22.387658 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:13:52 crc kubenswrapper[4758]: I0130 09:13:52.387207 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:13:52 crc kubenswrapper[4758]: I0130 09:13:52.387835 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:14:14 crc kubenswrapper[4758]: I0130 09:14:14.952236 4758 generic.go:334] "Generic (PLEG): container finished" podID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerID="86ec62d0d5fcf1e8a74a7d83e79533755a8f09adadadc64e5d1d41a69ce36daf" exitCode=0 Jan 30 09:14:14 crc kubenswrapper[4758]: I0130 09:14:14.952319 4758 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerDied","Data":"86ec62d0d5fcf1e8a74a7d83e79533755a8f09adadadc64e5d1d41a69ce36daf"} Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.421968 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459273 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459368 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459416 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459556 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459574 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.467523 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.468651 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp" (OuterVolumeSpecName: "kube-api-access-bsfgp") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "kube-api-access-bsfgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.501291 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory" (OuterVolumeSpecName: "inventory") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.515223 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.525260 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.560979 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561012 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561024 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561048 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561057 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.970614 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerDied","Data":"30133c0490ee20fac56a6741bd65b03b57f8a87d72d29e53c79df6b9f00067cc"} Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.970659 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30133c0490ee20fac56a6741bd65b03b57f8a87d72d29e53c79df6b9f00067cc" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.970674 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.144803 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"] Jan 30 09:14:17 crc kubenswrapper[4758]: E0130 09:14:17.145236 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.145252 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.145441 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.146056 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.148652 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.148820 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153125 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153204 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153261 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153453 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.162425 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.169948 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"] Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171398 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 
09:14:17.171459 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171601 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171685 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171727 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171769 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171795 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274436 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274480 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274549 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274581 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274604 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.278688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.278876 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.279089 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.279206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.280001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.281222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.281929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.283690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.293977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.474278 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.981314 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"] Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.991351 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:14:18 crc kubenswrapper[4758]: I0130 09:14:18.992957 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerStarted","Data":"7d5419d178211565e085acf1cdb2716a8786ff13a1a1ac8b9c2a26dd960a1501"} Jan 30 09:14:20 crc kubenswrapper[4758]: I0130 09:14:20.004623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerStarted","Data":"1d79080e887299522bedac745c89bf263e53d4b14d66aaebf1cebb95d50114f0"} Jan 30 09:14:20 crc kubenswrapper[4758]: I0130 09:14:20.066822 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" podStartSLOduration=2.050482201 podStartE2EDuration="3.066792321s" podCreationTimestamp="2026-01-30 09:14:17 +0000 UTC" firstStartedPulling="2026-01-30 09:14:17.991151937 +0000 UTC m=+2662.963463488" lastFinishedPulling="2026-01-30 09:14:19.007462057 +0000 UTC m=+2663.979773608" observedRunningTime="2026-01-30 09:14:20.060291936 +0000 UTC m=+2665.032603487" watchObservedRunningTime="2026-01-30 09:14:20.066792321 +0000 UTC m=+2665.039103872" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.387670 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.388060 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.388118 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.388986 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.389067 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7" gracePeriod=600 Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.029762 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7" exitCode=0 Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.029825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"} Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.030520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"} Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.030562 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.153353 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.156571 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.167021 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.167322 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.171474 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.193595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"collect-profiles-29496075-xcd84\" 
(UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.193664 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.193706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.295998 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.296128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.296169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.296960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.304637 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.315555 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.477669 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.927500 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 09:15:01 crc kubenswrapper[4758]: I0130 09:15:01.340241 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerStarted","Data":"32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d"} Jan 30 09:15:01 crc kubenswrapper[4758]: I0130 09:15:01.340616 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerStarted","Data":"7b796af45ea35cdc61b7e6ab04e9005acf14c49e11c349f5312e3103a2deca8a"} Jan 30 09:15:01 crc kubenswrapper[4758]: I0130 09:15:01.370761 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" podStartSLOduration=1.370612549 podStartE2EDuration="1.370612549s" podCreationTimestamp="2026-01-30 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:15:01.35631238 +0000 UTC m=+2706.328623951" watchObservedRunningTime="2026-01-30 09:15:01.370612549 +0000 UTC m=+2706.342924100" Jan 30 09:15:02 crc kubenswrapper[4758]: I0130 09:15:02.351563 4758 generic.go:334] "Generic (PLEG): container finished" podID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerID="32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d" exitCode=0 Jan 30 09:15:02 crc kubenswrapper[4758]: I0130 09:15:02.353646 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerDied","Data":"32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d"} Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.672312 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.770130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.770292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.770953 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume" (OuterVolumeSpecName: "config-volume") pod "31c0d7e9-81d5-4f36-bc9e-56e22d853f85" (UID: "31c0d7e9-81d5-4f36-bc9e-56e22d853f85"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.771452 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.772298 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.776181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31c0d7e9-81d5-4f36-bc9e-56e22d853f85" (UID: "31c0d7e9-81d5-4f36-bc9e-56e22d853f85"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.777440 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25" (OuterVolumeSpecName: "kube-api-access-jlx25") pod "31c0d7e9-81d5-4f36-bc9e-56e22d853f85" (UID: "31c0d7e9-81d5-4f36-bc9e-56e22d853f85"). InnerVolumeSpecName "kube-api-access-jlx25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.874307 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.874337 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.369905 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerDied","Data":"7b796af45ea35cdc61b7e6ab04e9005acf14c49e11c349f5312e3103a2deca8a"} Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.370300 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b796af45ea35cdc61b7e6ab04e9005acf14c49e11c349f5312e3103a2deca8a" Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.370129 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.437315 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.446852 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 09:15:05 crc kubenswrapper[4758]: I0130 09:15:05.782719 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9313ed67-0218-4d32-adf7-710ba67de622" path="/var/lib/kubelet/pods/9313ed67-0218-4d32-adf7-710ba67de622/volumes" Jan 30 09:15:10 crc kubenswrapper[4758]: I0130 09:15:10.214687 4758 scope.go:117] "RemoveContainer" containerID="166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.822836 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:20 crc kubenswrapper[4758]: E0130 09:15:20.823747 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerName="collect-profiles" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.823761 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerName="collect-profiles" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.823982 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerName="collect-profiles" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.826565 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.852362 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.020429 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.020473 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.020653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122318 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122408 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122937 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.124112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.143536 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.149301 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.663557 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:22 crc kubenswrapper[4758]: I0130 09:15:22.532604 4758 generic.go:334] "Generic (PLEG): container finished" podID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" exitCode=0 Jan 30 09:15:22 crc kubenswrapper[4758]: I0130 09:15:22.532706 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6"} Jan 30 09:15:22 crc kubenswrapper[4758]: I0130 09:15:22.534551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerStarted","Data":"fe42b9a116c8937d09bddab48d970d2230399a89728a7a74032adb1a823c994d"} Jan 30 09:15:23 crc kubenswrapper[4758]: I0130 09:15:23.544679 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerStarted","Data":"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3"} Jan 30 09:15:29 crc kubenswrapper[4758]: I0130 09:15:29.605231 4758 generic.go:334] "Generic (PLEG): container finished" podID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" exitCode=0 Jan 30 09:15:29 crc kubenswrapper[4758]: I0130 09:15:29.605284 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" 
event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3"} Jan 30 09:15:30 crc kubenswrapper[4758]: I0130 09:15:30.616573 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerStarted","Data":"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2"} Jan 30 09:15:30 crc kubenswrapper[4758]: I0130 09:15:30.650388 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7xp27" podStartSLOduration=3.151292853 podStartE2EDuration="10.649721462s" podCreationTimestamp="2026-01-30 09:15:20 +0000 UTC" firstStartedPulling="2026-01-30 09:15:22.534535575 +0000 UTC m=+2727.506847126" lastFinishedPulling="2026-01-30 09:15:30.032964184 +0000 UTC m=+2735.005275735" observedRunningTime="2026-01-30 09:15:30.638798728 +0000 UTC m=+2735.611110289" watchObservedRunningTime="2026-01-30 09:15:30.649721462 +0000 UTC m=+2735.622033013" Jan 30 09:15:31 crc kubenswrapper[4758]: I0130 09:15:31.149485 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:31 crc kubenswrapper[4758]: I0130 09:15:31.149580 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:32 crc kubenswrapper[4758]: I0130 09:15:32.204490 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7xp27" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" probeResult="failure" output=< Jan 30 09:15:32 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:15:32 crc kubenswrapper[4758]: > Jan 30 09:15:41 crc kubenswrapper[4758]: I0130 09:15:41.205284 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:41 crc kubenswrapper[4758]: I0130 09:15:41.265681 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:41 crc kubenswrapper[4758]: I0130 09:15:41.784035 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:42 crc kubenswrapper[4758]: I0130 09:15:42.726167 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7xp27" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" containerID="cri-o://9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" gracePeriod=2 Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.184291 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.360271 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.360579 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.360696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod 
\"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.361361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities" (OuterVolumeSpecName: "utilities") pod "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" (UID: "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.368235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8" (OuterVolumeSpecName: "kube-api-access-jntk8") pod "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" (UID: "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe"). InnerVolumeSpecName "kube-api-access-jntk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.462858 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.462885 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.489638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" (UID: "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.565658 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.737885 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2"} Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.738024 4758 scope.go:117] "RemoveContainer" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.737965 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.737816 4758 generic.go:334] "Generic (PLEG): container finished" podID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" exitCode=0 Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.738795 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"fe42b9a116c8937d09bddab48d970d2230399a89728a7a74032adb1a823c994d"} Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.764748 4758 scope.go:117] "RemoveContainer" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.792111 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 
09:15:43.792339 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.801988 4758 scope.go:117] "RemoveContainer" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.846306 4758 scope.go:117] "RemoveContainer" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" Jan 30 09:15:43 crc kubenswrapper[4758]: E0130 09:15:43.846654 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2\": container with ID starting with 9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2 not found: ID does not exist" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.846692 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2"} err="failed to get container status \"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2\": rpc error: code = NotFound desc = could not find container \"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2\": container with ID starting with 9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2 not found: ID does not exist" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.846717 4758 scope.go:117] "RemoveContainer" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" Jan 30 09:15:43 crc kubenswrapper[4758]: E0130 09:15:43.847096 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3\": container with ID 
starting with 402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3 not found: ID does not exist" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.847162 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3"} err="failed to get container status \"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3\": rpc error: code = NotFound desc = could not find container \"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3\": container with ID starting with 402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3 not found: ID does not exist" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.847302 4758 scope.go:117] "RemoveContainer" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" Jan 30 09:15:43 crc kubenswrapper[4758]: E0130 09:15:43.847656 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6\": container with ID starting with 44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6 not found: ID does not exist" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.847682 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6"} err="failed to get container status \"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6\": rpc error: code = NotFound desc = could not find container \"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6\": container with ID starting with 44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6 not found: 
ID does not exist" Jan 30 09:15:45 crc kubenswrapper[4758]: I0130 09:15:45.779651 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" path="/var/lib/kubelet/pods/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe/volumes" Jan 30 09:16:22 crc kubenswrapper[4758]: I0130 09:16:22.387641 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:16:22 crc kubenswrapper[4758]: I0130 09:16:22.388154 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:16:52 crc kubenswrapper[4758]: I0130 09:16:52.387242 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:16:52 crc kubenswrapper[4758]: I0130 09:16:52.387797 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:16:53 crc kubenswrapper[4758]: I0130 09:16:53.306542 4758 generic.go:334] "Generic (PLEG): container finished" podID="48c9d5d6-6dc5-4848-bade-3c302106b074" 
containerID="1d79080e887299522bedac745c89bf263e53d4b14d66aaebf1cebb95d50114f0" exitCode=0 Jan 30 09:16:53 crc kubenswrapper[4758]: I0130 09:16:53.306588 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerDied","Data":"1d79080e887299522bedac745c89bf263e53d4b14d66aaebf1cebb95d50114f0"} Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.737972 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758089 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758233 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758436 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758488 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758541 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758585 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758994 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.759324 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: 
\"48c9d5d6-6dc5-4848-bade-3c302106b074\") " Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.783934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.789977 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.792744 4758 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.796181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v" (OuterVolumeSpecName: "kube-api-access-4cq5v") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "kube-api-access-4cq5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.830796 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.835227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.838916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.865470 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.868075 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.873822 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory" (OuterVolumeSpecName: "inventory") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894281 4758 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894317 4758 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894327 4758 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894335 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cq5v\" (UniqueName: 
\"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894353 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894363 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894373 4758 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894381 4758 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.323397 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerDied","Data":"7d5419d178211565e085acf1cdb2716a8786ff13a1a1ac8b9c2a26dd960a1501"} Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.323442 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5419d178211565e085acf1cdb2716a8786ff13a1a1ac8b9c2a26dd960a1501" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.323494 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439233 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn"] Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439628 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439646 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439660 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-content" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439667 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-content" Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439675 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-utilities" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439681 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-utilities" Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439688 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439694 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439865 4758 
memory_manager.go:354] "RemoveStaleState removing state" podUID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439886 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.440497 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448508 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448585 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448638 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448809 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448855 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.451155 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn"] Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503578 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503687 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503712 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503744 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503763 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: 
\"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503833 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605276 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605344 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc 
kubenswrapper[4758]: I0130 09:16:55.605395 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605423 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605518 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605581 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.609408 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.609595 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.609917 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.610217 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: 
\"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.611161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.614168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.622352 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.763673 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:56 crc kubenswrapper[4758]: I0130 09:16:56.283215 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn"] Jan 30 09:16:56 crc kubenswrapper[4758]: I0130 09:16:56.337412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerStarted","Data":"4ad968b66da815adc412e79bec231c9a8b88fe94c19e206b409cd68c1ca8ab01"} Jan 30 09:16:56 crc kubenswrapper[4758]: I0130 09:16:56.732368 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:16:57 crc kubenswrapper[4758]: I0130 09:16:57.348196 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerStarted","Data":"eb7433698d4689365258f066619568b60512753789e2f4616d12bb0530207c37"} Jan 30 09:16:57 crc kubenswrapper[4758]: I0130 09:16:57.369354 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" podStartSLOduration=1.9226421820000001 podStartE2EDuration="2.369333689s" podCreationTimestamp="2026-01-30 09:16:55 +0000 UTC" firstStartedPulling="2026-01-30 09:16:56.282434237 +0000 UTC m=+2821.254745788" lastFinishedPulling="2026-01-30 09:16:56.729125744 +0000 UTC m=+2821.701437295" observedRunningTime="2026-01-30 09:16:57.36620413 +0000 UTC m=+2822.338515681" watchObservedRunningTime="2026-01-30 09:16:57.369333689 +0000 UTC m=+2822.341645240" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.387148 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.388567 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.388699 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.389559 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.389729 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" gracePeriod=600 Jan 30 09:17:22 crc kubenswrapper[4758]: E0130 09:17:22.517749 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.597563 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" exitCode=0 Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.597608 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"} Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.597668 4758 scope.go:117] "RemoveContainer" containerID="2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.598513 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:17:22 crc kubenswrapper[4758]: E0130 09:17:22.598902 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:17:32 crc kubenswrapper[4758]: I0130 09:17:32.769947 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:17:32 crc kubenswrapper[4758]: E0130 09:17:32.770703 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:17:45 crc kubenswrapper[4758]: I0130 09:17:45.775337 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:17:45 crc kubenswrapper[4758]: E0130 09:17:45.776095 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:17:58 crc kubenswrapper[4758]: I0130 09:17:58.769904 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:17:58 crc kubenswrapper[4758]: E0130 09:17:58.771022 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.310208 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.313307 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.329859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.412368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.412545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.412594 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.513715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.513864 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.513904 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.514242 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.514442 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.534821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.645347 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.176852 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.976564 4758 generic.go:334] "Generic (PLEG): container finished" podID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" exitCode=0 Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.976609 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f"} Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.976635 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerStarted","Data":"1493bc131fe2a14d48215a5096bfd60d6949daca73eab63a5d511884ee5335cb"} Jan 30 09:18:06 crc kubenswrapper[4758]: I0130 09:18:06.986975 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerStarted","Data":"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11"} Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.086366 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.088807 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.098855 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.189032 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.189371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.189418 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.291329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.291388 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.291440 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.292264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.292356 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.315125 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.407414 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.949848 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:08 crc kubenswrapper[4758]: I0130 09:18:08.010451 4758 generic.go:334] "Generic (PLEG): container finished" podID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" exitCode=0 Jan 30 09:18:08 crc kubenswrapper[4758]: I0130 09:18:08.010570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11"} Jan 30 09:18:08 crc kubenswrapper[4758]: I0130 09:18:08.018969 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerStarted","Data":"7cf806a302c6f18474a3bf8181f8ad7f7a77bc8704b82b5808c7dbe7f1566ffc"} Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.029235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerStarted","Data":"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5"} Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.030972 4758 generic.go:334] "Generic (PLEG): container finished" podID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" exitCode=0 Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.031016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" 
event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032"} Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.062017 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-spvmr" podStartSLOduration=2.507199319 podStartE2EDuration="5.061996929s" podCreationTimestamp="2026-01-30 09:18:04 +0000 UTC" firstStartedPulling="2026-01-30 09:18:05.978567485 +0000 UTC m=+2890.950879036" lastFinishedPulling="2026-01-30 09:18:08.533365095 +0000 UTC m=+2893.505676646" observedRunningTime="2026-01-30 09:18:09.058029175 +0000 UTC m=+2894.030340736" watchObservedRunningTime="2026-01-30 09:18:09.061996929 +0000 UTC m=+2894.034308480" Jan 30 09:18:10 crc kubenswrapper[4758]: I0130 09:18:10.041136 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerStarted","Data":"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b"} Jan 30 09:18:11 crc kubenswrapper[4758]: I0130 09:18:11.768919 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:18:11 crc kubenswrapper[4758]: E0130 09:18:11.769503 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:18:13 crc kubenswrapper[4758]: I0130 09:18:13.075265 4758 generic.go:334] "Generic (PLEG): container finished" podID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" 
containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" exitCode=0 Jan 30 09:18:13 crc kubenswrapper[4758]: I0130 09:18:13.075335 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b"} Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.087208 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerStarted","Data":"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94"} Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.108225 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5xsj" podStartSLOduration=2.652365807 podStartE2EDuration="7.108189656s" podCreationTimestamp="2026-01-30 09:18:07 +0000 UTC" firstStartedPulling="2026-01-30 09:18:09.03246651 +0000 UTC m=+2894.004778061" lastFinishedPulling="2026-01-30 09:18:13.488290359 +0000 UTC m=+2898.460601910" observedRunningTime="2026-01-30 09:18:14.103120766 +0000 UTC m=+2899.075432337" watchObservedRunningTime="2026-01-30 09:18:14.108189656 +0000 UTC m=+2899.080501197" Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.646834 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.646876 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.693776 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:15 crc kubenswrapper[4758]: I0130 09:18:15.188564 
4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.077853 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.111185 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-spvmr" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" containerID="cri-o://30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" gracePeriod=2 Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.421767 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.422082 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.489423 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.549088 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.627430 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"644832ef-4089-466f-9768-f30dfc4b6ef1\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.627555 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"644832ef-4089-466f-9768-f30dfc4b6ef1\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.627698 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"644832ef-4089-466f-9768-f30dfc4b6ef1\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.628878 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities" (OuterVolumeSpecName: "utilities") pod "644832ef-4089-466f-9768-f30dfc4b6ef1" (UID: "644832ef-4089-466f-9768-f30dfc4b6ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.634317 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x" (OuterVolumeSpecName: "kube-api-access-5r24x") pod "644832ef-4089-466f-9768-f30dfc4b6ef1" (UID: "644832ef-4089-466f-9768-f30dfc4b6ef1"). InnerVolumeSpecName "kube-api-access-5r24x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.651633 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "644832ef-4089-466f-9768-f30dfc4b6ef1" (UID: "644832ef-4089-466f-9768-f30dfc4b6ef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.729817 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.729859 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.729875 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122709 4758 generic.go:334] "Generic (PLEG): container finished" podID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" exitCode=0 Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122800 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5"} Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"1493bc131fe2a14d48215a5096bfd60d6949daca73eab63a5d511884ee5335cb"} Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122888 4758 scope.go:117] "RemoveContainer" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122812 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.151300 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.162526 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.176913 4758 scope.go:117] "RemoveContainer" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.190581 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.216511 4758 scope.go:117] "RemoveContainer" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.262652 4758 scope.go:117] "RemoveContainer" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" Jan 30 09:18:18 crc kubenswrapper[4758]: E0130 09:18:18.263204 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5\": container with ID starting with 
30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5 not found: ID does not exist" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263239 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5"} err="failed to get container status \"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5\": rpc error: code = NotFound desc = could not find container \"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5\": container with ID starting with 30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5 not found: ID does not exist" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263263 4758 scope.go:117] "RemoveContainer" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" Jan 30 09:18:18 crc kubenswrapper[4758]: E0130 09:18:18.263480 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11\": container with ID starting with 7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11 not found: ID does not exist" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263501 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11"} err="failed to get container status \"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11\": rpc error: code = NotFound desc = could not find container \"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11\": container with ID starting with 7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11 not found: ID does not 
exist" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263532 4758 scope.go:117] "RemoveContainer" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" Jan 30 09:18:18 crc kubenswrapper[4758]: E0130 09:18:18.263725 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f\": container with ID starting with c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f not found: ID does not exist" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263747 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f"} err="failed to get container status \"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f\": rpc error: code = NotFound desc = could not find container \"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f\": container with ID starting with c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f not found: ID does not exist" Jan 30 09:18:19 crc kubenswrapper[4758]: I0130 09:18:19.779663 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" path="/var/lib/kubelet/pods/644832ef-4089-466f-9768-f30dfc4b6ef1/volumes" Jan 30 09:18:19 crc kubenswrapper[4758]: I0130 09:18:19.874153 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.138047 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5xsj" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" 
containerID="cri-o://daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" gracePeriod=2 Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.570967 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.683270 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.683695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.683873 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.685963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities" (OuterVolumeSpecName: "utilities") pod "e63975c5-3525-4e0b-9bd5-3dd93de36d38" (UID: "e63975c5-3525-4e0b-9bd5-3dd93de36d38"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.690156 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm" (OuterVolumeSpecName: "kube-api-access-nntvm") pod "e63975c5-3525-4e0b-9bd5-3dd93de36d38" (UID: "e63975c5-3525-4e0b-9bd5-3dd93de36d38"). InnerVolumeSpecName "kube-api-access-nntvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.738103 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e63975c5-3525-4e0b-9bd5-3dd93de36d38" (UID: "e63975c5-3525-4e0b-9bd5-3dd93de36d38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.786371 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.786415 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.786428 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149398 4758 generic.go:334] "Generic (PLEG): container finished" podID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" 
containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" exitCode=0 Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149452 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94"} Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149842 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"7cf806a302c6f18474a3bf8181f8ad7f7a77bc8704b82b5808c7dbe7f1566ffc"} Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149866 4758 scope.go:117] "RemoveContainer" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.167289 4758 scope.go:117] "RemoveContainer" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.202873 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.207611 4758 scope.go:117] "RemoveContainer" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.214014 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.246168 4758 scope.go:117] "RemoveContainer" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" Jan 30 
09:18:21 crc kubenswrapper[4758]: E0130 09:18:21.246660 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94\": container with ID starting with daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94 not found: ID does not exist" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.246739 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94"} err="failed to get container status \"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94\": rpc error: code = NotFound desc = could not find container \"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94\": container with ID starting with daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94 not found: ID does not exist" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.246773 4758 scope.go:117] "RemoveContainer" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" Jan 30 09:18:21 crc kubenswrapper[4758]: E0130 09:18:21.247356 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b\": container with ID starting with 2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b not found: ID does not exist" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.247379 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b"} err="failed to get container status 
\"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b\": rpc error: code = NotFound desc = could not find container \"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b\": container with ID starting with 2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b not found: ID does not exist" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.247395 4758 scope.go:117] "RemoveContainer" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" Jan 30 09:18:21 crc kubenswrapper[4758]: E0130 09:18:21.247810 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032\": container with ID starting with 041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032 not found: ID does not exist" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.247834 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032"} err="failed to get container status \"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032\": rpc error: code = NotFound desc = could not find container \"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032\": container with ID starting with 041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032 not found: ID does not exist" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.778226 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" path="/var/lib/kubelet/pods/e63975c5-3525-4e0b-9bd5-3dd93de36d38/volumes" Jan 30 09:18:25 crc kubenswrapper[4758]: I0130 09:18:25.783526 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 
09:18:25 crc kubenswrapper[4758]: E0130 09:18:25.784799 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:18:38 crc kubenswrapper[4758]: I0130 09:18:38.769098 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:18:38 crc kubenswrapper[4758]: E0130 09:18:38.769749 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:18:52 crc kubenswrapper[4758]: I0130 09:18:52.769049 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:18:52 crc kubenswrapper[4758]: E0130 09:18:52.769716 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:04 crc kubenswrapper[4758]: I0130 09:19:04.768906 4758 scope.go:117] "RemoveContainer" 
containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:04 crc kubenswrapper[4758]: E0130 09:19:04.769706 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:17 crc kubenswrapper[4758]: I0130 09:19:17.768240 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:17 crc kubenswrapper[4758]: E0130 09:19:17.768959 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:31 crc kubenswrapper[4758]: I0130 09:19:31.769296 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:31 crc kubenswrapper[4758]: E0130 09:19:31.770205 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:42 crc kubenswrapper[4758]: I0130 09:19:42.768566 4758 scope.go:117] 
"RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:42 crc kubenswrapper[4758]: E0130 09:19:42.769333 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:56 crc kubenswrapper[4758]: I0130 09:19:56.768444 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:56 crc kubenswrapper[4758]: E0130 09:19:56.769213 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:08 crc kubenswrapper[4758]: I0130 09:20:08.768573 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:08 crc kubenswrapper[4758]: E0130 09:20:08.769299 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:15 crc kubenswrapper[4758]: I0130 09:20:15.076693 
4758 generic.go:334] "Generic (PLEG): container finished" podID="38932896-a566-4440-b672-33909cb638b0" containerID="eb7433698d4689365258f066619568b60512753789e2f4616d12bb0530207c37" exitCode=0 Jan 30 09:20:15 crc kubenswrapper[4758]: I0130 09:20:15.076784 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerDied","Data":"eb7433698d4689365258f066619568b60512753789e2f4616d12bb0530207c37"} Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.481874 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604497 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604565 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604692 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604732 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.610547 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl" (OuterVolumeSpecName: "kube-api-access-qdvbl") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "kube-api-access-qdvbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.619750 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.632211 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.634308 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.634949 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory" (OuterVolumeSpecName: "inventory") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.638800 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.640408 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707621 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707655 4758 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707665 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707674 4758 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707684 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707719 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707731 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:17 crc kubenswrapper[4758]: I0130 09:20:17.096418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerDied","Data":"4ad968b66da815adc412e79bec231c9a8b88fe94c19e206b409cd68c1ca8ab01"} Jan 30 09:20:17 crc kubenswrapper[4758]: I0130 09:20:17.096738 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad968b66da815adc412e79bec231c9a8b88fe94c19e206b409cd68c1ca8ab01" Jan 30 09:20:17 crc kubenswrapper[4758]: I0130 09:20:17.096483 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:20:23 crc kubenswrapper[4758]: I0130 09:20:23.768642 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:23 crc kubenswrapper[4758]: E0130 09:20:23.769157 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:36 crc kubenswrapper[4758]: I0130 09:20:36.769245 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:36 crc kubenswrapper[4758]: E0130 09:20:36.770216 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:48 crc kubenswrapper[4758]: I0130 09:20:48.769118 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:48 crc kubenswrapper[4758]: E0130 09:20:48.769723 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.594700 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.596828 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.596936 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.596998 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597073 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597149 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597209 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597272 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597324 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597406 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38932896-a566-4440-b672-33909cb638b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597465 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="38932896-a566-4440-b672-33909cb638b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597535 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597602 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597666 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597733 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598012 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="38932896-a566-4440-b672-33909cb638b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598110 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598173 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598827 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.601182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bnbxz" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.601419 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.601574 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.603014 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.607669 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.700948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701309 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701388 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701646 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701802 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803921 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803941 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804087 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804193 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " 
pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804672 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805104 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805463 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805915 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc 
kubenswrapper[4758]: I0130 09:20:58.811112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.811338 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.814087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.825897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.835407 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.940975 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:20:59 crc kubenswrapper[4758]: I0130 09:20:59.368027 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 09:20:59 crc kubenswrapper[4758]: I0130 09:20:59.372288 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:20:59 crc kubenswrapper[4758]: I0130 09:20:59.452904 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerStarted","Data":"333f254addae44025e00aa879c29d6e6ca58b2430bd9e1d7e1eba46b32166b13"} Jan 30 09:21:00 crc kubenswrapper[4758]: I0130 09:21:00.768633 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:00 crc kubenswrapper[4758]: E0130 09:21:00.769155 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:15 crc kubenswrapper[4758]: I0130 09:21:15.777893 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:15 crc kubenswrapper[4758]: E0130 09:21:15.779364 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:27 crc kubenswrapper[4758]: I0130 09:21:27.771248 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:27 crc kubenswrapper[4758]: E0130 09:21:27.772025 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.100656 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.112209 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdxsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(110e1168-332c-4165-bd6e-47419c571681): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.113703 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="110e1168-332c-4165-bd6e-47419c571681" Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.757234 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="110e1168-332c-4165-bd6e-47419c571681" Jan 30 09:21:40 crc 
kubenswrapper[4758]: I0130 09:21:40.769101 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:40 crc kubenswrapper[4758]: E0130 09:21:40.769814 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:48 crc kubenswrapper[4758]: I0130 09:21:48.891737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerStarted","Data":"9585d72b864ad0afa161d8615e2e581e1849cfc14f137d53455eed18ce1d77db"} Jan 30 09:21:48 crc kubenswrapper[4758]: I0130 09:21:48.910996 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.075146353 podStartE2EDuration="51.910784209s" podCreationTimestamp="2026-01-30 09:20:57 +0000 UTC" firstStartedPulling="2026-01-30 09:20:59.372003016 +0000 UTC m=+3064.344314567" lastFinishedPulling="2026-01-30 09:21:47.207640872 +0000 UTC m=+3112.179952423" observedRunningTime="2026-01-30 09:21:48.906840585 +0000 UTC m=+3113.879152136" watchObservedRunningTime="2026-01-30 09:21:48.910784209 +0000 UTC m=+3113.883095760" Jan 30 09:21:53 crc kubenswrapper[4758]: I0130 09:21:53.768579 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:53 crc kubenswrapper[4758]: E0130 09:21:53.769374 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:22:05 crc kubenswrapper[4758]: I0130 09:22:05.777992 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:22:05 crc kubenswrapper[4758]: E0130 09:22:05.779132 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:22:17 crc kubenswrapper[4758]: I0130 09:22:17.768866 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:22:17 crc kubenswrapper[4758]: E0130 09:22:17.769629 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:22:30 crc kubenswrapper[4758]: I0130 09:22:30.768943 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:22:31 crc kubenswrapper[4758]: I0130 09:22:31.259823 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"} Jan 30 09:24:52 crc kubenswrapper[4758]: I0130 09:24:52.387816 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:24:52 crc kubenswrapper[4758]: I0130 09:24:52.388350 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572821 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572809 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572809 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 
09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572812 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.857653 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.860381 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.901884 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.902300 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.902730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:16 crc 
kubenswrapper[4758]: I0130 09:25:16.944813 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.003652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.003709 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.003807 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.004380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.004394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod 
\"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.024582 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.185449 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:18 crc kubenswrapper[4758]: I0130 09:25:18.509268 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:18 crc kubenswrapper[4758]: W0130 09:25:18.514465 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6796d9ff_9d00_4d02_9bab_b5782c937e9c.slice/crio-b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107 WatchSource:0}: Error finding container b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107: Status 404 returned error can't find the container with id b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107 Jan 30 09:25:18 crc kubenswrapper[4758]: I0130 09:25:18.599933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerStarted","Data":"b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107"} Jan 30 09:25:19 crc kubenswrapper[4758]: I0130 09:25:19.612646 4758 generic.go:334] "Generic (PLEG): container finished" podID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" 
containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" exitCode=0 Jan 30 09:25:19 crc kubenswrapper[4758]: I0130 09:25:19.612710 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b"} Jan 30 09:25:20 crc kubenswrapper[4758]: I0130 09:25:20.626615 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerStarted","Data":"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab"} Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.387522 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.387921 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.643641 4758 generic.go:334] "Generic (PLEG): container finished" podID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" exitCode=0 Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.643693 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" 
event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab"} Jan 30 09:25:23 crc kubenswrapper[4758]: I0130 09:25:23.654830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerStarted","Data":"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b"} Jan 30 09:25:23 crc kubenswrapper[4758]: I0130 09:25:23.679276 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fkv7" podStartSLOduration=4.27346879 podStartE2EDuration="7.679255982s" podCreationTimestamp="2026-01-30 09:25:16 +0000 UTC" firstStartedPulling="2026-01-30 09:25:19.616462527 +0000 UTC m=+3324.588774078" lastFinishedPulling="2026-01-30 09:25:23.022249719 +0000 UTC m=+3327.994561270" observedRunningTime="2026-01-30 09:25:23.677727115 +0000 UTC m=+3328.650038686" watchObservedRunningTime="2026-01-30 09:25:23.679255982 +0000 UTC m=+3328.651567533" Jan 30 09:25:27 crc kubenswrapper[4758]: I0130 09:25:27.186225 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:27 crc kubenswrapper[4758]: I0130 09:25:27.186860 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:28 crc kubenswrapper[4758]: I0130 09:25:28.241375 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4fkv7" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" probeResult="failure" output=< Jan 30 09:25:28 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:25:28 crc kubenswrapper[4758]: > Jan 30 09:25:37 crc kubenswrapper[4758]: I0130 09:25:37.245303 4758 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:37 crc kubenswrapper[4758]: I0130 09:25:37.296543 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:37 crc kubenswrapper[4758]: I0130 09:25:37.484824 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:38 crc kubenswrapper[4758]: I0130 09:25:38.781953 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fkv7" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" containerID="cri-o://7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" gracePeriod=2 Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.644182 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.801944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.802139 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.802211 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.803678 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities" (OuterVolumeSpecName: "utilities") pod "6796d9ff-9d00-4d02-9bab-b5782c937e9c" (UID: "6796d9ff-9d00-4d02-9bab-b5782c937e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806477 4758 generic.go:334] "Generic (PLEG): container finished" podID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" exitCode=0 Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b"} Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806564 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107"} Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806596 4758 scope.go:117] "RemoveContainer" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806695 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.819953 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26" (OuterVolumeSpecName: "kube-api-access-xvw26") pod "6796d9ff-9d00-4d02-9bab-b5782c937e9c" (UID: "6796d9ff-9d00-4d02-9bab-b5782c937e9c"). InnerVolumeSpecName "kube-api-access-xvw26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.864428 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6796d9ff-9d00-4d02-9bab-b5782c937e9c" (UID: "6796d9ff-9d00-4d02-9bab-b5782c937e9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.886421 4758 scope.go:117] "RemoveContainer" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.906599 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.906623 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.906634 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") on node \"crc\" DevicePath \"\"" Jan 30 09:25:39 crc 
kubenswrapper[4758]: I0130 09:25:39.910237 4758 scope.go:117] "RemoveContainer" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.948200 4758 scope.go:117] "RemoveContainer" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" Jan 30 09:25:39 crc kubenswrapper[4758]: E0130 09:25:39.948594 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b\": container with ID starting with 7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b not found: ID does not exist" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.949644 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b"} err="failed to get container status \"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b\": rpc error: code = NotFound desc = could not find container \"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b\": container with ID starting with 7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b not found: ID does not exist" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.949749 4758 scope.go:117] "RemoveContainer" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" Jan 30 09:25:39 crc kubenswrapper[4758]: E0130 09:25:39.950198 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab\": container with ID starting with b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab not found: ID does not exist" 
containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.950255 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab"} err="failed to get container status \"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab\": rpc error: code = NotFound desc = could not find container \"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab\": container with ID starting with b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab not found: ID does not exist" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.950290 4758 scope.go:117] "RemoveContainer" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" Jan 30 09:25:39 crc kubenswrapper[4758]: E0130 09:25:39.950659 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b\": container with ID starting with 7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b not found: ID does not exist" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.950695 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b"} err="failed to get container status \"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b\": rpc error: code = NotFound desc = could not find container \"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b\": container with ID starting with 7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b not found: ID does not exist" Jan 30 09:25:40 crc kubenswrapper[4758]: I0130 09:25:40.143903 4758 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:40 crc kubenswrapper[4758]: I0130 09:25:40.151980 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:41 crc kubenswrapper[4758]: I0130 09:25:41.780754 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" path="/var/lib/kubelet/pods/6796d9ff-9d00-4d02-9bab-b5782c937e9c/volumes" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.387565 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.388160 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.388210 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.388956 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.389008 4758 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b" gracePeriod=600 Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.916733 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b" exitCode=0 Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.916778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"} Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.917186 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"} Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.917218 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:27:52 crc kubenswrapper[4758]: I0130 09:27:52.387699 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:27:52 crc kubenswrapper[4758]: I0130 09:27:52.388194 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.027171 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:15 crc kubenswrapper[4758]: E0130 09:28:15.034969 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035006 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" Jan 30 09:28:15 crc kubenswrapper[4758]: E0130 09:28:15.035031 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-utilities" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035052 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-utilities" Jan 30 09:28:15 crc kubenswrapper[4758]: E0130 09:28:15.035075 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-content" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035082 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-content" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035265 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.036640 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.037493 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.106622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.106691 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.106814 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.208303 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.208474 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.208512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.209111 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.209169 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.229791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.359731 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:16 crc kubenswrapper[4758]: I0130 09:28:16.059820 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:16 crc kubenswrapper[4758]: W0130 09:28:16.087530 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68c558e8_a714_430a_884f_794eb739486b.slice/crio-8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446 WatchSource:0}: Error finding container 8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446: Status 404 returned error can't find the container with id 8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446 Jan 30 09:28:16 crc kubenswrapper[4758]: I0130 09:28:16.187127 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerStarted","Data":"8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446"} Jan 30 09:28:17 crc kubenswrapper[4758]: I0130 09:28:17.212225 4758 generic.go:334] "Generic (PLEG): container finished" podID="68c558e8-a714-430a-884f-794eb739486b" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" exitCode=0 Jan 30 09:28:17 crc kubenswrapper[4758]: I0130 09:28:17.213399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79"} Jan 30 09:28:17 crc kubenswrapper[4758]: I0130 09:28:17.215416 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:28:18 crc kubenswrapper[4758]: I0130 09:28:18.224276 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerStarted","Data":"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db"} Jan 30 09:28:20 crc kubenswrapper[4758]: I0130 09:28:20.243769 4758 generic.go:334] "Generic (PLEG): container finished" podID="68c558e8-a714-430a-884f-794eb739486b" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" exitCode=0 Jan 30 09:28:20 crc kubenswrapper[4758]: I0130 09:28:20.243841 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db"} Jan 30 09:28:21 crc kubenswrapper[4758]: I0130 09:28:21.258984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerStarted","Data":"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f"} Jan 30 09:28:22 crc kubenswrapper[4758]: I0130 09:28:22.387492 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:28:22 crc kubenswrapper[4758]: I0130 09:28:22.387874 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:28:25 crc kubenswrapper[4758]: I0130 09:28:25.359957 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:25 crc kubenswrapper[4758]: I0130 09:28:25.360321 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:26 crc kubenswrapper[4758]: I0130 09:28:26.409004 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f674k" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" probeResult="failure" output=< Jan 30 09:28:26 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:28:26 crc kubenswrapper[4758]: > Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.667260 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f674k" podStartSLOduration=10.131771322 podStartE2EDuration="13.667236972s" podCreationTimestamp="2026-01-30 09:28:14 +0000 UTC" firstStartedPulling="2026-01-30 09:28:17.215160384 +0000 UTC m=+3502.187471935" lastFinishedPulling="2026-01-30 09:28:20.750626034 +0000 UTC m=+3505.722937585" observedRunningTime="2026-01-30 09:28:21.283721199 +0000 UTC m=+3506.256032750" watchObservedRunningTime="2026-01-30 09:28:27.667236972 +0000 UTC m=+3512.639548523" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.676726 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.679568 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.690399 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.772520 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.772593 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.772953 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.874978 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875116 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875168 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.895791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:28 crc kubenswrapper[4758]: I0130 09:28:28.000803 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:28 crc kubenswrapper[4758]: I0130 09:28:28.525926 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:28 crc kubenswrapper[4758]: W0130 09:28:28.551192 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc6c17c_d7f2_4f48_86a1_74d2013747d0.slice/crio-95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6 WatchSource:0}: Error finding container 95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6: Status 404 returned error can't find the container with id 95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6 Jan 30 09:28:29 crc kubenswrapper[4758]: I0130 09:28:29.336518 4758 generic.go:334] "Generic (PLEG): container finished" podID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerID="0ef8472d48cd7ea49c109f42f514daca323b939fbce39db440a086c70484a1ce" exitCode=0 Jan 30 09:28:29 crc kubenswrapper[4758]: I0130 09:28:29.336563 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"0ef8472d48cd7ea49c109f42f514daca323b939fbce39db440a086c70484a1ce"} Jan 30 09:28:29 crc kubenswrapper[4758]: I0130 09:28:29.337950 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerStarted","Data":"95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6"} Jan 30 09:28:30 crc kubenswrapper[4758]: I0130 09:28:30.347526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" 
event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerStarted","Data":"d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00"} Jan 30 09:28:31 crc kubenswrapper[4758]: I0130 09:28:31.358263 4758 generic.go:334] "Generic (PLEG): container finished" podID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerID="d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00" exitCode=0 Jan 30 09:28:31 crc kubenswrapper[4758]: I0130 09:28:31.358353 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00"} Jan 30 09:28:32 crc kubenswrapper[4758]: I0130 09:28:32.369282 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerStarted","Data":"a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d"} Jan 30 09:28:32 crc kubenswrapper[4758]: I0130 09:28:32.389511 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdw24" podStartSLOduration=2.998105518 podStartE2EDuration="5.389479242s" podCreationTimestamp="2026-01-30 09:28:27 +0000 UTC" firstStartedPulling="2026-01-30 09:28:29.33952968 +0000 UTC m=+3514.311841231" lastFinishedPulling="2026-01-30 09:28:31.730903404 +0000 UTC m=+3516.703214955" observedRunningTime="2026-01-30 09:28:32.387100698 +0000 UTC m=+3517.359412269" watchObservedRunningTime="2026-01-30 09:28:32.389479242 +0000 UTC m=+3517.361790793" Jan 30 09:28:35 crc kubenswrapper[4758]: I0130 09:28:35.461500 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:35 crc kubenswrapper[4758]: I0130 09:28:35.627125 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:36 crc kubenswrapper[4758]: I0130 09:28:36.046157 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:37 crc kubenswrapper[4758]: I0130 09:28:37.423188 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f674k" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" containerID="cri-o://a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" gracePeriod=2 Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.000883 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.002709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.089309 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.184694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"68c558e8-a714-430a-884f-794eb739486b\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.184903 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"68c558e8-a714-430a-884f-794eb739486b\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.185060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"68c558e8-a714-430a-884f-794eb739486b\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.186883 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities" (OuterVolumeSpecName: "utilities") pod "68c558e8-a714-430a-884f-794eb739486b" (UID: "68c558e8-a714-430a-884f-794eb739486b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.206341 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9" (OuterVolumeSpecName: "kube-api-access-nfrn9") pod "68c558e8-a714-430a-884f-794eb739486b" (UID: "68c558e8-a714-430a-884f-794eb739486b"). InnerVolumeSpecName "kube-api-access-nfrn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.256305 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68c558e8-a714-430a-884f-794eb739486b" (UID: "68c558e8-a714-430a-884f-794eb739486b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.287188 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.287224 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.287234 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.433777 4758 generic.go:334] "Generic (PLEG): container finished" podID="68c558e8-a714-430a-884f-794eb739486b" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" exitCode=0 Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.434930 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.439170 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f"} Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.439216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446"} Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.439233 4758 scope.go:117] "RemoveContainer" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.468636 4758 scope.go:117] "RemoveContainer" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.475514 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.488222 4758 scope.go:117] "RemoveContainer" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.489314 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.529970 4758 scope.go:117] "RemoveContainer" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" Jan 30 09:28:38 crc kubenswrapper[4758]: E0130 09:28:38.530534 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f\": container with ID starting with a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f not found: ID does not exist" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530581 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f"} err="failed to get container status \"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f\": rpc error: code = NotFound desc = could not find container \"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f\": container with ID starting with a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f not found: ID does not exist" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530609 4758 scope.go:117] "RemoveContainer" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" Jan 30 09:28:38 crc kubenswrapper[4758]: E0130 09:28:38.530877 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db\": container with ID starting with 5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db not found: ID does not exist" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530929 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db"} err="failed to get container status \"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db\": rpc error: code = NotFound desc = could not find container \"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db\": container with ID 
starting with 5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db not found: ID does not exist" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530946 4758 scope.go:117] "RemoveContainer" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" Jan 30 09:28:38 crc kubenswrapper[4758]: E0130 09:28:38.531297 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79\": container with ID starting with da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79 not found: ID does not exist" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.531325 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79"} err="failed to get container status \"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79\": rpc error: code = NotFound desc = could not find container \"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79\": container with ID starting with da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79 not found: ID does not exist" Jan 30 09:28:39 crc kubenswrapper[4758]: I0130 09:28:39.052093 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rdw24" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" probeResult="failure" output=< Jan 30 09:28:39 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:28:39 crc kubenswrapper[4758]: > Jan 30 09:28:39 crc kubenswrapper[4758]: I0130 09:28:39.779093 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c558e8-a714-430a-884f-794eb739486b" 
path="/var/lib/kubelet/pods/68c558e8-a714-430a-884f-794eb739486b/volumes" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.421348 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:28:41 crc kubenswrapper[4758]: E0130 09:28:41.422052 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-utilities" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422067 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-utilities" Jan 30 09:28:41 crc kubenswrapper[4758]: E0130 09:28:41.422086 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422092 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" Jan 30 09:28:41 crc kubenswrapper[4758]: E0130 09:28:41.422114 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-content" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422121 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-content" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422300 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.424818 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.441425 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.544993 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.545050 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.545155 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646454 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.647196 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.667539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.769601 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:42 crc kubenswrapper[4758]: I0130 09:28:42.326191 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:28:42 crc kubenswrapper[4758]: I0130 09:28:42.492148 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"b80cb638ffc2d4c1d2a9212d574aa63f341273f37e50fb903af0a0b72da45f1c"} Jan 30 09:28:43 crc kubenswrapper[4758]: I0130 09:28:43.502287 4758 generic.go:334] "Generic (PLEG): container finished" podID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" exitCode=0 Jan 30 09:28:43 crc kubenswrapper[4758]: I0130 09:28:43.502519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987"} Jan 30 09:28:45 crc kubenswrapper[4758]: I0130 09:28:45.520519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"} Jan 30 09:28:48 crc kubenswrapper[4758]: I0130 09:28:48.048268 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:48 crc kubenswrapper[4758]: I0130 09:28:48.098154 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:49 crc kubenswrapper[4758]: I0130 09:28:49.047799 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:49 crc kubenswrapper[4758]: I0130 09:28:49.549833 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdw24" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" containerID="cri-o://a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d" gracePeriod=2 Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.561183 4758 generic.go:334] "Generic (PLEG): container finished" podID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerID="a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d" exitCode=0 Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.561247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d"} Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.671837 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.720475 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.721144 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.721299 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.725066 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities" (OuterVolumeSpecName: "utilities") pod "adc6c17c-d7f2-4f48-86a1-74d2013747d0" (UID: "adc6c17c-d7f2-4f48-86a1-74d2013747d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.727611 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5" (OuterVolumeSpecName: "kube-api-access-6slc5") pod "adc6c17c-d7f2-4f48-86a1-74d2013747d0" (UID: "adc6c17c-d7f2-4f48-86a1-74d2013747d0"). InnerVolumeSpecName "kube-api-access-6slc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.736512 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adc6c17c-d7f2-4f48-86a1-74d2013747d0" (UID: "adc6c17c-d7f2-4f48-86a1-74d2013747d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.823702 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.824022 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.824300 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.580207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6"} Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.580546 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.580577 4758 scope.go:117] "RemoveContainer" containerID="a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.604715 4758 scope.go:117] "RemoveContainer" containerID="d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.623060 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.630481 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.654742 4758 scope.go:117] "RemoveContainer" containerID="0ef8472d48cd7ea49c109f42f514daca323b939fbce39db440a086c70484a1ce" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.778371 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" path="/var/lib/kubelet/pods/adc6c17c-d7f2-4f48-86a1-74d2013747d0/volumes" Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.387537 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.387589 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:28:52 crc kubenswrapper[4758]: 
I0130 09:28:52.387744 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.388415 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.388469 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" gracePeriod=600 Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.591416 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" exitCode=0 Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.591477 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"} Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.591511 4758 scope.go:117] "RemoveContainer" containerID="cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b" Jan 30 09:28:53 crc kubenswrapper[4758]: E0130 09:28:53.014830 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:28:53 crc kubenswrapper[4758]: I0130 09:28:53.605555 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:28:53 crc kubenswrapper[4758]: E0130 09:28:53.605896 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:28:53 crc kubenswrapper[4758]: I0130 09:28:53.606688 4758 generic.go:334] "Generic (PLEG): container finished" podID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463" exitCode=0 Jan 30 09:28:53 crc kubenswrapper[4758]: I0130 09:28:53.606743 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"} Jan 30 09:28:54 crc kubenswrapper[4758]: I0130 09:28:54.618951 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"} Jan 30 09:28:54 crc kubenswrapper[4758]: I0130 09:28:54.642973 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-g4zt4" podStartSLOduration=3.039037715 podStartE2EDuration="13.642952757s" podCreationTimestamp="2026-01-30 09:28:41 +0000 UTC" firstStartedPulling="2026-01-30 09:28:43.507629011 +0000 UTC m=+3528.479940562" lastFinishedPulling="2026-01-30 09:28:54.111544053 +0000 UTC m=+3539.083855604" observedRunningTime="2026-01-30 09:28:54.639947332 +0000 UTC m=+3539.612258893" watchObservedRunningTime="2026-01-30 09:28:54.642952757 +0000 UTC m=+3539.615264318" Jan 30 09:29:01 crc kubenswrapper[4758]: I0130 09:29:01.780171 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:01 crc kubenswrapper[4758]: I0130 09:29:01.780753 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:02 crc kubenswrapper[4758]: I0130 09:29:02.821603 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:02 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:02 crc kubenswrapper[4758]: > Jan 30 09:29:07 crc kubenswrapper[4758]: I0130 09:29:07.770584 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:07 crc kubenswrapper[4758]: E0130 09:29:07.771342 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:12 crc kubenswrapper[4758]: 
I0130 09:29:12.825403 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:12 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:12 crc kubenswrapper[4758]: > Jan 30 09:29:19 crc kubenswrapper[4758]: I0130 09:29:19.769470 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:19 crc kubenswrapper[4758]: E0130 09:29:19.770083 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:22 crc kubenswrapper[4758]: I0130 09:29:22.824323 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:22 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:22 crc kubenswrapper[4758]: > Jan 30 09:29:31 crc kubenswrapper[4758]: I0130 09:29:31.768710 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:31 crc kubenswrapper[4758]: E0130 09:29:31.769734 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:32 crc kubenswrapper[4758]: I0130 09:29:32.817351 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:32 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:32 crc kubenswrapper[4758]: > Jan 30 09:29:41 crc kubenswrapper[4758]: I0130 09:29:41.819717 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:41 crc kubenswrapper[4758]: I0130 09:29:41.882433 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:42 crc kubenswrapper[4758]: I0130 09:29:42.644980 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.023314 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" containerID="cri-o://8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" gracePeriod=2 Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.740521 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.912912 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.913467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.913783 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.914078 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities" (OuterVolumeSpecName: "utilities") pod "99afc7c0-1f67-4dcd-93f3-6fec892cdb11" (UID: "99afc7c0-1f67-4dcd-93f3-6fec892cdb11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.917802 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.935587 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s" (OuterVolumeSpecName: "kube-api-access-7ww6s") pod "99afc7c0-1f67-4dcd-93f3-6fec892cdb11" (UID: "99afc7c0-1f67-4dcd-93f3-6fec892cdb11"). InnerVolumeSpecName "kube-api-access-7ww6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.020064 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") on node \"crc\" DevicePath \"\"" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.033659 4758 generic.go:334] "Generic (PLEG): container finished" podID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" exitCode=0 Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.033735 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.033746 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"} Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.034912 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"b80cb638ffc2d4c1d2a9212d574aa63f341273f37e50fb903af0a0b72da45f1c"} Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.034944 4758 scope.go:117] "RemoveContainer" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.043463 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99afc7c0-1f67-4dcd-93f3-6fec892cdb11" (UID: "99afc7c0-1f67-4dcd-93f3-6fec892cdb11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.055007 4758 scope.go:117] "RemoveContainer" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.078558 4758 scope.go:117] "RemoveContainer" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.121977 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.123780 4758 scope.go:117] "RemoveContainer" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" Jan 30 09:29:44 crc kubenswrapper[4758]: E0130 09:29:44.124973 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9\": container with ID starting with 8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9 not found: ID does not exist" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.125031 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"} err="failed to get container status \"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9\": rpc error: code = NotFound desc = could not find container \"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9\": container with ID starting with 8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9 not found: ID does not exist" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.125087 4758 
scope.go:117] "RemoveContainer" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463" Jan 30 09:29:44 crc kubenswrapper[4758]: E0130 09:29:44.126472 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": container with ID starting with 13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463 not found: ID does not exist" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.126501 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"} err="failed to get container status \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": rpc error: code = NotFound desc = could not find container \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": container with ID starting with 13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463 not found: ID does not exist" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.126521 4758 scope.go:117] "RemoveContainer" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" Jan 30 09:29:44 crc kubenswrapper[4758]: E0130 09:29:44.127065 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987\": container with ID starting with bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987 not found: ID does not exist" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.127102 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987"} err="failed to get container status \"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987\": rpc error: code = NotFound desc = could not find container \"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987\": container with ID starting with bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987 not found: ID does not exist" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.367214 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.373482 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:29:45 crc kubenswrapper[4758]: I0130 09:29:45.775402 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:45 crc kubenswrapper[4758]: E0130 09:29:45.775882 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:45 crc kubenswrapper[4758]: I0130 09:29:45.780949 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" path="/var/lib/kubelet/pods/99afc7c0-1f67-4dcd-93f3-6fec892cdb11/volumes" Jan 30 09:29:56 crc kubenswrapper[4758]: I0130 09:29:56.768369 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:56 crc kubenswrapper[4758]: E0130 09:29:56.768954 4758 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.175586 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5"] Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176359 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176376 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176397 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176405 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176420 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176430 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176450 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-utilities" Jan 30 
09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176459 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176473 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176481 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176499 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176506 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176719 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176753 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.177513 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.192661 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5"] Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.195474 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.196144 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.237704 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.238211 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.238270 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.339963 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.340023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.340080 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.341331 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.349998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.361077 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.510536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:01 crc kubenswrapper[4758]: I0130 09:30:01.064430 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5"] Jan 30 09:30:01 crc kubenswrapper[4758]: W0130 09:30:01.070467 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb6a254_a4f7_410f_a4fa_018580518f25.slice/crio-5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e WatchSource:0}: Error finding container 5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e: Status 404 returned error can't find the container with id 5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e Jan 30 09:30:01 crc kubenswrapper[4758]: I0130 09:30:01.191760 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" event={"ID":"7eb6a254-a4f7-410f-a4fa-018580518f25","Type":"ContainerStarted","Data":"5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e"} Jan 30 09:30:02 crc 
kubenswrapper[4758]: I0130 09:30:02.201614 4758 generic.go:334] "Generic (PLEG): container finished" podID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerID="a3f6ac664b00b1f5b33d8b6858ec6fe6a87aa7b1e4e6ff49e0d26c386269eb16" exitCode=0 Jan 30 09:30:02 crc kubenswrapper[4758]: I0130 09:30:02.201966 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" event={"ID":"7eb6a254-a4f7-410f-a4fa-018580518f25","Type":"ContainerDied","Data":"a3f6ac664b00b1f5b33d8b6858ec6fe6a87aa7b1e4e6ff49e0d26c386269eb16"} Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.685486 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.824838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"7eb6a254-a4f7-410f-a4fa-018580518f25\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.824977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"7eb6a254-a4f7-410f-a4fa-018580518f25\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.825086 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"7eb6a254-a4f7-410f-a4fa-018580518f25\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.828247 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume" (OuterVolumeSpecName: "config-volume") pod "7eb6a254-a4f7-410f-a4fa-018580518f25" (UID: "7eb6a254-a4f7-410f-a4fa-018580518f25"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.847568 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf" (OuterVolumeSpecName: "kube-api-access-ttgnf") pod "7eb6a254-a4f7-410f-a4fa-018580518f25" (UID: "7eb6a254-a4f7-410f-a4fa-018580518f25"). InnerVolumeSpecName "kube-api-access-ttgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.858628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7eb6a254-a4f7-410f-a4fa-018580518f25" (UID: "7eb6a254-a4f7-410f-a4fa-018580518f25"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.927848 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.927879 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.928150 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.220267 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" event={"ID":"7eb6a254-a4f7-410f-a4fa-018580518f25","Type":"ContainerDied","Data":"5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e"} Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.220313 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e" Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.220338 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.788998 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.796891 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 09:30:05 crc kubenswrapper[4758]: I0130 09:30:05.782402 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" path="/var/lib/kubelet/pods/76da59ec-7916-4d09-8154-61e9848aaec6/volumes" Jan 30 09:30:08 crc kubenswrapper[4758]: I0130 09:30:08.768791 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:08 crc kubenswrapper[4758]: E0130 09:30:08.769497 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:10 crc kubenswrapper[4758]: I0130 09:30:10.644278 4758 scope.go:117] "RemoveContainer" containerID="60e0a5bbfedb1bfda46d2d92720410ff62ddecd2757f775d7598cb6c8cd22199" Jan 30 09:30:20 crc kubenswrapper[4758]: I0130 09:30:20.768772 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:20 crc kubenswrapper[4758]: E0130 09:30:20.769596 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:31 crc kubenswrapper[4758]: I0130 09:30:31.768208 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:31 crc kubenswrapper[4758]: E0130 09:30:31.769025 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:44 crc kubenswrapper[4758]: I0130 09:30:44.768494 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:44 crc kubenswrapper[4758]: E0130 09:30:44.769237 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:57 crc kubenswrapper[4758]: I0130 09:30:57.769532 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:57 crc kubenswrapper[4758]: E0130 09:30:57.770907 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:09 crc kubenswrapper[4758]: I0130 09:31:09.768956 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:09 crc kubenswrapper[4758]: E0130 09:31:09.769773 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:20 crc kubenswrapper[4758]: I0130 09:31:20.769289 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:20 crc kubenswrapper[4758]: E0130 09:31:20.770068 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:34 crc kubenswrapper[4758]: I0130 09:31:34.768412 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:34 crc kubenswrapper[4758]: E0130 09:31:34.769083 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:45 crc kubenswrapper[4758]: I0130 09:31:45.776848 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:45 crc kubenswrapper[4758]: E0130 09:31:45.778058 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:57 crc kubenswrapper[4758]: I0130 09:31:57.768998 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:57 crc kubenswrapper[4758]: E0130 09:31:57.771253 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:09 crc kubenswrapper[4758]: I0130 09:32:09.768828 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:32:09 crc kubenswrapper[4758]: E0130 09:32:09.769640 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:23 crc kubenswrapper[4758]: I0130 09:32:23.769139 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:32:23 crc kubenswrapper[4758]: E0130 09:32:23.769992 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:37 crc kubenswrapper[4758]: I0130 09:32:37.769517 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:32:37 crc kubenswrapper[4758]: E0130 09:32:37.770484 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:48 crc kubenswrapper[4758]: I0130 09:32:48.769663 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:32:48 crc kubenswrapper[4758]: E0130 09:32:48.770471 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:33:03 crc kubenswrapper[4758]: I0130 09:33:03.769591 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:33:03 crc kubenswrapper[4758]: E0130 09:33:03.770434 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:33:18 crc kubenswrapper[4758]: I0130 09:33:18.769445 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:33:18 crc kubenswrapper[4758]: E0130 09:33:18.770772 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:33:31 crc kubenswrapper[4758]: I0130 09:33:31.770457 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:33:31 crc kubenswrapper[4758]: E0130 09:33:31.772578 4758 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:33:43 crc kubenswrapper[4758]: I0130 09:33:43.773093 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:33:43 crc kubenswrapper[4758]: E0130 09:33:43.773935 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:33:58 crc kubenswrapper[4758]: I0130 09:33:58.768454 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:33:59 crc kubenswrapper[4758]: I0130 09:33:59.264458 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643"} Jan 30 09:35:23 crc kubenswrapper[4758]: I0130 09:35:23.216353 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-75f5775999-fhl5h" podUID="c2358e5c-db98-4b7b-8b6c-2e83132655a9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.430870 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:35:58 crc kubenswrapper[4758]: E0130 09:35:58.437171 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerName="collect-profiles" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.438570 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerName="collect-profiles" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.439080 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerName="collect-profiles" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.441076 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.451655 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.515476 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.515556 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.515605 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.616945 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.616997 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.617029 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.617515 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.622599 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.647544 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.817488 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:59 crc kubenswrapper[4758]: I0130 09:35:59.433332 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.296956 4758 generic.go:334] "Generic (PLEG): container finished" podID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" exitCode=0 Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.297013 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052"} Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.297085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerStarted","Data":"209a026d27a75bc017328d882a6ebbaff9f5b2b309f32daabefef59f4a7e5def"} Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 
09:36:00.299085 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:36:02 crc kubenswrapper[4758]: I0130 09:36:02.318586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerStarted","Data":"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde"} Jan 30 09:36:03 crc kubenswrapper[4758]: I0130 09:36:03.328357 4758 generic.go:334] "Generic (PLEG): container finished" podID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" exitCode=0 Jan 30 09:36:03 crc kubenswrapper[4758]: I0130 09:36:03.328446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde"} Jan 30 09:36:04 crc kubenswrapper[4758]: I0130 09:36:04.338958 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerStarted","Data":"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d"} Jan 30 09:36:04 crc kubenswrapper[4758]: I0130 09:36:04.378953 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4m4x" podStartSLOduration=2.926966293 podStartE2EDuration="6.378927429s" podCreationTimestamp="2026-01-30 09:35:58 +0000 UTC" firstStartedPulling="2026-01-30 09:36:00.298779778 +0000 UTC m=+3965.271091329" lastFinishedPulling="2026-01-30 09:36:03.750740914 +0000 UTC m=+3968.723052465" observedRunningTime="2026-01-30 09:36:04.366443276 +0000 UTC m=+3969.338754837" watchObservedRunningTime="2026-01-30 09:36:04.378927429 +0000 UTC m=+3969.351238980" Jan 30 09:36:08 crc 
kubenswrapper[4758]: I0130 09:36:08.818313 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:08 crc kubenswrapper[4758]: I0130 09:36:08.818786 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:08 crc kubenswrapper[4758]: I0130 09:36:08.867903 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:09 crc kubenswrapper[4758]: I0130 09:36:09.430512 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:09 crc kubenswrapper[4758]: I0130 09:36:09.500032 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:11 crc kubenswrapper[4758]: I0130 09:36:11.396718 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4m4x" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" containerID="cri-o://4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" gracePeriod=2 Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.203174 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.279734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.279954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.280167 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.280707 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities" (OuterVolumeSpecName: "utilities") pod "debfcde3-b34c-4223-aaf0-c39aa0f5f015" (UID: "debfcde3-b34c-4223-aaf0-c39aa0f5f015"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.286312 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6" (OuterVolumeSpecName: "kube-api-access-zjgx6") pod "debfcde3-b34c-4223-aaf0-c39aa0f5f015" (UID: "debfcde3-b34c-4223-aaf0-c39aa0f5f015"). InnerVolumeSpecName "kube-api-access-zjgx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.346261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "debfcde3-b34c-4223-aaf0-c39aa0f5f015" (UID: "debfcde3-b34c-4223-aaf0-c39aa0f5f015"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.383103 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") on node \"crc\" DevicePath \"\"" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.383143 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.383155 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409446 4758 generic.go:334] "Generic (PLEG): container finished" podID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" exitCode=0 Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d"} Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409574 4758 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"209a026d27a75bc017328d882a6ebbaff9f5b2b309f32daabefef59f4a7e5def"} Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409578 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409592 4758 scope.go:117] "RemoveContainer" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.439233 4758 scope.go:117] "RemoveContainer" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.448363 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.458111 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.468400 4758 scope.go:117] "RemoveContainer" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.508384 4758 scope.go:117] "RemoveContainer" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" Jan 30 09:36:12 crc kubenswrapper[4758]: E0130 09:36:12.508795 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d\": container with ID starting with 4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d not found: ID does not exist" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 
09:36:12.508843 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d"} err="failed to get container status \"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d\": rpc error: code = NotFound desc = could not find container \"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d\": container with ID starting with 4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d not found: ID does not exist" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.508876 4758 scope.go:117] "RemoveContainer" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" Jan 30 09:36:12 crc kubenswrapper[4758]: E0130 09:36:12.509286 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde\": container with ID starting with 6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde not found: ID does not exist" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.509324 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde"} err="failed to get container status \"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde\": rpc error: code = NotFound desc = could not find container \"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde\": container with ID starting with 6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde not found: ID does not exist" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.509349 4758 scope.go:117] "RemoveContainer" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" Jan 30 09:36:12 crc 
kubenswrapper[4758]: E0130 09:36:12.509684 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052\": container with ID starting with 366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052 not found: ID does not exist" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.509706 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052"} err="failed to get container status \"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052\": rpc error: code = NotFound desc = could not find container \"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052\": container with ID starting with 366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052 not found: ID does not exist" Jan 30 09:36:13 crc kubenswrapper[4758]: I0130 09:36:13.779378 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" path="/var/lib/kubelet/pods/debfcde3-b34c-4223-aaf0-c39aa0f5f015/volumes" Jan 30 09:36:22 crc kubenswrapper[4758]: I0130 09:36:22.387426 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:36:22 crc kubenswrapper[4758]: I0130 09:36:22.388011 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 30 09:36:52 crc kubenswrapper[4758]: I0130 09:36:52.387119 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:36:52 crc kubenswrapper[4758]: I0130 09:36:52.388570 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.387586 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.388164 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.388222 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.389076 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.389159 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643" gracePeriod=600 Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.006739 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643" exitCode=0 Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.007129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643"} Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.007157 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32"} Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.007174 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.647444 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:38:55 crc kubenswrapper[4758]: E0130 09:38:55.648361 
4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-utilities" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648376 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-utilities" Jan 30 09:38:55 crc kubenswrapper[4758]: E0130 09:38:55.648391 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648397 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" Jan 30 09:38:55 crc kubenswrapper[4758]: E0130 09:38:55.648426 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-content" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648432 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-content" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648615 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.650051 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.659986 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.786243 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.786441 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.786464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888521 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888743 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.889064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.922902 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.966956 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.555065 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.845998 4758 generic.go:334] "Generic (PLEG): container finished" podID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerID="d8918a87876ba52cb2581f9ce297e46ecf37f213aa8947f1980878195117fc95" exitCode=0 Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.846075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"d8918a87876ba52cb2581f9ce297e46ecf37f213aa8947f1980878195117fc95"} Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.846357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerStarted","Data":"1ddd4b0cc014a730c56bbafd15e932f6bf0bfa269ab021df192b18ce3276f9eb"} Jan 30 09:38:57 crc kubenswrapper[4758]: I0130 09:38:57.855400 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerStarted","Data":"5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400"} Jan 30 09:38:58 crc kubenswrapper[4758]: I0130 09:38:58.865574 4758 generic.go:334] "Generic (PLEG): container finished" podID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerID="5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400" exitCode=0 Jan 30 09:38:58 crc kubenswrapper[4758]: I0130 09:38:58.865898 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" 
event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400"} Jan 30 09:38:59 crc kubenswrapper[4758]: I0130 09:38:59.875791 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerStarted","Data":"ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86"} Jan 30 09:38:59 crc kubenswrapper[4758]: I0130 09:38:59.902430 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qdgq" podStartSLOduration=2.517725361 podStartE2EDuration="4.9024125s" podCreationTimestamp="2026-01-30 09:38:55 +0000 UTC" firstStartedPulling="2026-01-30 09:38:56.847524374 +0000 UTC m=+4141.819835935" lastFinishedPulling="2026-01-30 09:38:59.232211523 +0000 UTC m=+4144.204523074" observedRunningTime="2026-01-30 09:38:59.893345095 +0000 UTC m=+4144.865656666" watchObservedRunningTime="2026-01-30 09:38:59.9024125 +0000 UTC m=+4144.874724051" Jan 30 09:39:05 crc kubenswrapper[4758]: I0130 09:39:05.967860 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:05 crc kubenswrapper[4758]: I0130 09:39:05.968430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:06 crc kubenswrapper[4758]: I0130 09:39:06.024166 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:07 crc kubenswrapper[4758]: I0130 09:39:07.029146 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:07 crc kubenswrapper[4758]: I0130 09:39:07.091578 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:39:08 crc kubenswrapper[4758]: I0130 09:39:08.957470 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9qdgq" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" containerID="cri-o://ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86" gracePeriod=2 Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971496 4758 generic.go:334] "Generic (PLEG): container finished" podID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerID="ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86" exitCode=0 Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86"} Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971850 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"1ddd4b0cc014a730c56bbafd15e932f6bf0bfa269ab021df192b18ce3276f9eb"} Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971867 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ddd4b0cc014a730c56bbafd15e932f6bf0bfa269ab021df192b18ce3276f9eb" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.077071 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.222664 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"8cb0f705-65da-4841-a48b-0dd8e895ee79\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.222856 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"8cb0f705-65da-4841-a48b-0dd8e895ee79\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.222944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"8cb0f705-65da-4841-a48b-0dd8e895ee79\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.231475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk" (OuterVolumeSpecName: "kube-api-access-4lvwk") pod "8cb0f705-65da-4841-a48b-0dd8e895ee79" (UID: "8cb0f705-65da-4841-a48b-0dd8e895ee79"). InnerVolumeSpecName "kube-api-access-4lvwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.235264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities" (OuterVolumeSpecName: "utilities") pod "8cb0f705-65da-4841-a48b-0dd8e895ee79" (UID: "8cb0f705-65da-4841-a48b-0dd8e895ee79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.247676 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cb0f705-65da-4841-a48b-0dd8e895ee79" (UID: "8cb0f705-65da-4841-a48b-0dd8e895ee79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.325591 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") on node \"crc\" DevicePath \"\"" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.325635 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.325647 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.983764 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:11 crc kubenswrapper[4758]: I0130 09:39:11.019456 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:39:11 crc kubenswrapper[4758]: I0130 09:39:11.029763 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:39:11 crc kubenswrapper[4758]: I0130 09:39:11.779536 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" path="/var/lib/kubelet/pods/8cb0f705-65da-4841-a48b-0dd8e895ee79/volumes" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.357009 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:39:16 crc kubenswrapper[4758]: E0130 09:39:16.358027 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358075 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" Jan 30 09:39:16 crc kubenswrapper[4758]: E0130 09:39:16.358091 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-content" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358098 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-content" Jan 30 09:39:16 crc kubenswrapper[4758]: E0130 09:39:16.358129 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-utilities" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358137 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" 
containerName="extract-utilities" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358322 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.359621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.382744 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.440423 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.440598 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.440649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.542504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.542557 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.542671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.543082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.543122 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.571236 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xrvt\" (UniqueName: 
\"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.678403 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:18 crc kubenswrapper[4758]: I0130 09:39:18.250197 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:39:19 crc kubenswrapper[4758]: I0130 09:39:19.046542 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" exitCode=0 Jan 30 09:39:19 crc kubenswrapper[4758]: I0130 09:39:19.046545 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90"} Jan 30 09:39:19 crc kubenswrapper[4758]: I0130 09:39:19.047077 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerStarted","Data":"a3647e27c55a8e5e47fb5f033751f32f8b22438687007a22c34f8b75b9fb5247"} Jan 30 09:39:20 crc kubenswrapper[4758]: I0130 09:39:20.056454 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerStarted","Data":"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e"} Jan 30 09:39:22 crc kubenswrapper[4758]: I0130 09:39:22.387571 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:39:22 crc kubenswrapper[4758]: I0130 09:39:22.388156 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:39:26 crc kubenswrapper[4758]: I0130 09:39:26.104148 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" exitCode=0 Jan 30 09:39:26 crc kubenswrapper[4758]: I0130 09:39:26.104253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e"} Jan 30 09:39:27 crc kubenswrapper[4758]: I0130 09:39:27.115325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerStarted","Data":"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507"} Jan 30 09:39:27 crc kubenswrapper[4758]: I0130 09:39:27.132455 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vd6gl" podStartSLOduration=3.609219744 podStartE2EDuration="11.132434041s" podCreationTimestamp="2026-01-30 09:39:16 +0000 UTC" firstStartedPulling="2026-01-30 09:39:19.048325137 +0000 UTC m=+4164.020636688" lastFinishedPulling="2026-01-30 09:39:26.571539434 +0000 UTC m=+4171.543850985" observedRunningTime="2026-01-30 09:39:27.13050252 
+0000 UTC m=+4172.102814091" watchObservedRunningTime="2026-01-30 09:39:27.132434041 +0000 UTC m=+4172.104745592" Jan 30 09:39:36 crc kubenswrapper[4758]: I0130 09:39:36.679770 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:36 crc kubenswrapper[4758]: I0130 09:39:36.680438 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:37 crc kubenswrapper[4758]: I0130 09:39:37.732173 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" probeResult="failure" output=< Jan 30 09:39:37 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:39:37 crc kubenswrapper[4758]: > Jan 30 09:39:41 crc kubenswrapper[4758]: I0130 09:39:41.801385 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0a15517a-ff48-40d1-91b4-442bfef91fc1" containerName="galera" probeResult="failure" output="command timed out" Jan 30 09:39:47 crc kubenswrapper[4758]: I0130 09:39:47.736779 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" probeResult="failure" output=< Jan 30 09:39:47 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:39:47 crc kubenswrapper[4758]: > Jan 30 09:39:52 crc kubenswrapper[4758]: I0130 09:39:52.387577 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:39:52 crc kubenswrapper[4758]: I0130 
09:39:52.388275 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:39:57 crc kubenswrapper[4758]: I0130 09:39:57.724266 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" probeResult="failure" output=< Jan 30 09:39:57 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:39:57 crc kubenswrapper[4758]: > Jan 30 09:40:06 crc kubenswrapper[4758]: I0130 09:40:06.738074 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:06 crc kubenswrapper[4758]: I0130 09:40:06.810948 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:06 crc kubenswrapper[4758]: I0130 09:40:06.977742 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:40:07 crc kubenswrapper[4758]: I0130 09:40:07.813460 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" containerID="cri-o://a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" gracePeriod=2 Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.395447 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.524841 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"d4b53aed-fb8c-44ae-a56e-57109de1e728\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.524960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"d4b53aed-fb8c-44ae-a56e-57109de1e728\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.525006 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"d4b53aed-fb8c-44ae-a56e-57109de1e728\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.525872 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities" (OuterVolumeSpecName: "utilities") pod "d4b53aed-fb8c-44ae-a56e-57109de1e728" (UID: "d4b53aed-fb8c-44ae-a56e-57109de1e728"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.534892 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt" (OuterVolumeSpecName: "kube-api-access-5xrvt") pod "d4b53aed-fb8c-44ae-a56e-57109de1e728" (UID: "d4b53aed-fb8c-44ae-a56e-57109de1e728"). InnerVolumeSpecName "kube-api-access-5xrvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.627192 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") on node \"crc\" DevicePath \"\"" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.627222 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.647748 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b53aed-fb8c-44ae-a56e-57109de1e728" (UID: "d4b53aed-fb8c-44ae-a56e-57109de1e728"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.729602 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823331 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" exitCode=0 Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507"} Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823397 4758 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823427 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"a3647e27c55a8e5e47fb5f033751f32f8b22438687007a22c34f8b75b9fb5247"} Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823451 4758 scope.go:117] "RemoveContainer" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.849527 4758 scope.go:117] "RemoveContainer" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.858376 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.869648 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.881421 4758 scope.go:117] "RemoveContainer" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.916844 4758 scope.go:117] "RemoveContainer" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" Jan 30 09:40:08 crc kubenswrapper[4758]: E0130 09:40:08.917310 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507\": container with ID starting with a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507 not found: ID does not exist" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.917398 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507"} err="failed to get container status \"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507\": rpc error: code = NotFound desc = could not find container \"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507\": container with ID starting with a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507 not found: ID does not exist" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.917489 4758 scope.go:117] "RemoveContainer" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" Jan 30 09:40:08 crc kubenswrapper[4758]: E0130 09:40:08.917986 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e\": container with ID starting with 0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e not found: ID does not exist" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.918016 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e"} err="failed to get container status \"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e\": rpc error: code = NotFound desc = could not find container \"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e\": container with ID starting with 0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e not found: ID does not exist" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.918049 4758 scope.go:117] "RemoveContainer" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" Jan 30 09:40:08 crc kubenswrapper[4758]: E0130 
09:40:08.918280 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90\": container with ID starting with 5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90 not found: ID does not exist" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.918361 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90"} err="failed to get container status \"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90\": rpc error: code = NotFound desc = could not find container \"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90\": container with ID starting with 5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90 not found: ID does not exist" Jan 30 09:40:09 crc kubenswrapper[4758]: I0130 09:40:09.778953 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" path="/var/lib/kubelet/pods/d4b53aed-fb8c-44ae-a56e-57109de1e728/volumes" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.387482 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.388440 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.388498 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.389139 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.389208 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" gracePeriod=600 Jan 30 09:40:22 crc kubenswrapper[4758]: E0130 09:40:22.538839 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.951513 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" exitCode=0 Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.951819 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32"} Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.951852 4758 scope.go:117] "RemoveContainer" containerID="f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.952486 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:40:22 crc kubenswrapper[4758]: E0130 09:40:22.952733 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:40:35 crc kubenswrapper[4758]: I0130 09:40:35.774919 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:40:35 crc kubenswrapper[4758]: E0130 09:40:35.775693 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:40:50 crc kubenswrapper[4758]: I0130 09:40:50.768438 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:40:50 crc kubenswrapper[4758]: E0130 09:40:50.769332 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:04 crc kubenswrapper[4758]: I0130 09:41:04.769232 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:04 crc kubenswrapper[4758]: E0130 09:41:04.770589 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:16 crc kubenswrapper[4758]: I0130 09:41:16.768676 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:16 crc kubenswrapper[4758]: E0130 09:41:16.769408 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:28 crc kubenswrapper[4758]: I0130 09:41:28.768302 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:28 crc kubenswrapper[4758]: E0130 09:41:28.769726 4758 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.359601 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:37 crc kubenswrapper[4758]: E0130 09:41:37.360729 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.360744 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" Jan 30 09:41:37 crc kubenswrapper[4758]: E0130 09:41:37.360757 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-content" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.360763 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-content" Jan 30 09:41:37 crc kubenswrapper[4758]: E0130 09:41:37.360792 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-utilities" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.360800 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-utilities" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.361003 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 
09:41:37.362636 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.371374 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.542477 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.543408 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.543541 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645276 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 
09:41:37.645389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645419 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.646296 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.666509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.725399 4758 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.302069 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.580119 4758 generic.go:334] "Generic (PLEG): container finished" podID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" exitCode=0 Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.580162 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995"} Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.580191 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerStarted","Data":"273cfcdb73acd4276ef5b7f53190801a5e0944ad916fb68665013d4d4ff85c64"} Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.581963 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:41:39 crc kubenswrapper[4758]: I0130 09:41:39.768772 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:39 crc kubenswrapper[4758]: E0130 09:41:39.769416 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:40 crc kubenswrapper[4758]: I0130 09:41:40.617160 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerStarted","Data":"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7"} Jan 30 09:41:41 crc kubenswrapper[4758]: I0130 09:41:41.627107 4758 generic.go:334] "Generic (PLEG): container finished" podID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" exitCode=0 Jan 30 09:41:41 crc kubenswrapper[4758]: I0130 09:41:41.627209 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7"} Jan 30 09:41:42 crc kubenswrapper[4758]: I0130 09:41:42.641735 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerStarted","Data":"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19"} Jan 30 09:41:42 crc kubenswrapper[4758]: I0130 09:41:42.681067 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2jcgk" podStartSLOduration=2.230496544 podStartE2EDuration="5.681025762s" podCreationTimestamp="2026-01-30 09:41:37 +0000 UTC" firstStartedPulling="2026-01-30 09:41:38.581712784 +0000 UTC m=+4303.554024345" lastFinishedPulling="2026-01-30 09:41:42.032242022 +0000 UTC m=+4307.004553563" observedRunningTime="2026-01-30 09:41:42.66413699 +0000 UTC m=+4307.636448611" watchObservedRunningTime="2026-01-30 09:41:42.681025762 +0000 UTC m=+4307.653337323" Jan 30 09:41:47 crc kubenswrapper[4758]: I0130 09:41:47.726802 4758 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:47 crc kubenswrapper[4758]: I0130 09:41:47.727372 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:47 crc kubenswrapper[4758]: I0130 09:41:47.793276 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:48 crc kubenswrapper[4758]: I0130 09:41:48.738500 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:48 crc kubenswrapper[4758]: I0130 09:41:48.788906 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:50 crc kubenswrapper[4758]: I0130 09:41:50.713845 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2jcgk" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" containerID="cri-o://1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" gracePeriod=2 Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.186828 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.296723 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"604c3bea-aed3-47c1-907a-6353466ebd3d\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.296808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"604c3bea-aed3-47c1-907a-6353466ebd3d\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.297315 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"604c3bea-aed3-47c1-907a-6353466ebd3d\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.298166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities" (OuterVolumeSpecName: "utilities") pod "604c3bea-aed3-47c1-907a-6353466ebd3d" (UID: "604c3bea-aed3-47c1-907a-6353466ebd3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.307376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf" (OuterVolumeSpecName: "kube-api-access-hq7cf") pod "604c3bea-aed3-47c1-907a-6353466ebd3d" (UID: "604c3bea-aed3-47c1-907a-6353466ebd3d"). InnerVolumeSpecName "kube-api-access-hq7cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.399791 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.399832 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725174 4758 generic.go:334] "Generic (PLEG): container finished" podID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" exitCode=0 Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19"} Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725240 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"273cfcdb73acd4276ef5b7f53190801a5e0944ad916fb68665013d4d4ff85c64"} Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725236 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725255 4758 scope.go:117] "RemoveContainer" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.758974 4758 scope.go:117] "RemoveContainer" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.791848 4758 scope.go:117] "RemoveContainer" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.842598 4758 scope.go:117] "RemoveContainer" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" Jan 30 09:41:51 crc kubenswrapper[4758]: E0130 09:41:51.843885 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19\": container with ID starting with 1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19 not found: ID does not exist" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.843925 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19"} err="failed to get container status \"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19\": rpc error: code = NotFound desc = could not find container \"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19\": container with ID starting with 1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19 not found: ID does not exist" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.843949 4758 scope.go:117] "RemoveContainer" 
containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" Jan 30 09:41:51 crc kubenswrapper[4758]: E0130 09:41:51.844229 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7\": container with ID starting with 1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7 not found: ID does not exist" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.844256 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7"} err="failed to get container status \"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7\": rpc error: code = NotFound desc = could not find container \"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7\": container with ID starting with 1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7 not found: ID does not exist" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.844273 4758 scope.go:117] "RemoveContainer" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" Jan 30 09:41:51 crc kubenswrapper[4758]: E0130 09:41:51.844761 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995\": container with ID starting with cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995 not found: ID does not exist" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.844791 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995"} err="failed to get container status \"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995\": rpc error: code = NotFound desc = could not find container \"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995\": container with ID starting with cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995 not found: ID does not exist" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.864803 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "604c3bea-aed3-47c1-907a-6353466ebd3d" (UID: "604c3bea-aed3-47c1-907a-6353466ebd3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.915463 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:52 crc kubenswrapper[4758]: I0130 09:41:52.054756 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:52 crc kubenswrapper[4758]: I0130 09:41:52.065238 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:53 crc kubenswrapper[4758]: I0130 09:41:53.781559 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" path="/var/lib/kubelet/pods/604c3bea-aed3-47c1-907a-6353466ebd3d/volumes" Jan 30 09:41:54 crc kubenswrapper[4758]: I0130 09:41:54.768348 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:54 crc kubenswrapper[4758]: E0130 
09:41:54.768867 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:05 crc kubenswrapper[4758]: I0130 09:42:05.774522 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:05 crc kubenswrapper[4758]: E0130 09:42:05.775358 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:16 crc kubenswrapper[4758]: I0130 09:42:16.768026 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:16 crc kubenswrapper[4758]: E0130 09:42:16.769785 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:31 crc kubenswrapper[4758]: I0130 09:42:31.769114 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:31 crc 
kubenswrapper[4758]: E0130 09:42:31.769902 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:43 crc kubenswrapper[4758]: I0130 09:42:43.769407 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:43 crc kubenswrapper[4758]: E0130 09:42:43.770273 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:58 crc kubenswrapper[4758]: I0130 09:42:58.770019 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:58 crc kubenswrapper[4758]: E0130 09:42:58.770903 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:10 crc kubenswrapper[4758]: I0130 09:43:10.769297 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 
30 09:43:10 crc kubenswrapper[4758]: E0130 09:43:10.770352 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:23 crc kubenswrapper[4758]: I0130 09:43:23.769112 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:23 crc kubenswrapper[4758]: E0130 09:43:23.770116 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:38 crc kubenswrapper[4758]: I0130 09:43:38.768596 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:38 crc kubenswrapper[4758]: E0130 09:43:38.769315 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:50 crc kubenswrapper[4758]: I0130 09:43:50.768596 4758 scope.go:117] "RemoveContainer" 
containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:50 crc kubenswrapper[4758]: E0130 09:43:50.769521 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:04 crc kubenswrapper[4758]: I0130 09:44:04.769871 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:04 crc kubenswrapper[4758]: E0130 09:44:04.770708 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:17 crc kubenswrapper[4758]: I0130 09:44:17.768456 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:17 crc kubenswrapper[4758]: E0130 09:44:17.769314 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:28 crc kubenswrapper[4758]: I0130 09:44:28.769101 4758 scope.go:117] 
"RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:28 crc kubenswrapper[4758]: E0130 09:44:28.769945 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:43 crc kubenswrapper[4758]: I0130 09:44:43.768488 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:43 crc kubenswrapper[4758]: E0130 09:44:43.769309 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:55 crc kubenswrapper[4758]: I0130 09:44:55.778125 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:55 crc kubenswrapper[4758]: E0130 09:44:55.779332 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.177698 
4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9"] Jan 30 09:45:00 crc kubenswrapper[4758]: E0130 09:45:00.178617 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-utilities" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178633 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-utilities" Jan 30 09:45:00 crc kubenswrapper[4758]: E0130 09:45:00.178650 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-content" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178658 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-content" Jan 30 09:45:00 crc kubenswrapper[4758]: E0130 09:45:00.178681 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178689 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178922 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.179696 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.182197 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.182227 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.201264 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9"] Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.287901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.288451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.288649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.390671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.390929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.391014 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.391931 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.403170 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.412539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.499147 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.946477 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9"] Jan 30 09:45:01 crc kubenswrapper[4758]: I0130 09:45:01.507843 4758 generic.go:334] "Generic (PLEG): container finished" podID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerID="93889ac7c0f1bc37603be56029065806c7d4dfddcb4b8e399430cc4b379e1169" exitCode=0 Jan 30 09:45:01 crc kubenswrapper[4758]: I0130 09:45:01.507889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" event={"ID":"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40","Type":"ContainerDied","Data":"93889ac7c0f1bc37603be56029065806c7d4dfddcb4b8e399430cc4b379e1169"} Jan 30 09:45:01 crc kubenswrapper[4758]: I0130 09:45:01.508181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" 
event={"ID":"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40","Type":"ContainerStarted","Data":"28a7f75c279992a219a407e21608e01b03e79c103971aa64a29c9a234f414260"} Jan 30 09:45:02 crc kubenswrapper[4758]: I0130 09:45:02.913509 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.044989 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.045208 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.045239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.046670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" (UID: "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.053169 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq" (OuterVolumeSpecName: "kube-api-access-nrktq") pod "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" (UID: "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40"). InnerVolumeSpecName "kube-api-access-nrktq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.053465 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" (UID: "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.147719 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.147769 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.147778 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.524775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" 
event={"ID":"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40","Type":"ContainerDied","Data":"28a7f75c279992a219a407e21608e01b03e79c103971aa64a29c9a234f414260"} Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.524821 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.525030 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a7f75c279992a219a407e21608e01b03e79c103971aa64a29c9a234f414260" Jan 30 09:45:04 crc kubenswrapper[4758]: I0130 09:45:04.008598 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:45:04 crc kubenswrapper[4758]: I0130 09:45:04.016392 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:45:05 crc kubenswrapper[4758]: I0130 09:45:05.781329 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" path="/var/lib/kubelet/pods/e8b22039-92b3-4488-a971-2913dedb64ed/volumes" Jan 30 09:45:06 crc kubenswrapper[4758]: I0130 09:45:06.769151 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:45:06 crc kubenswrapper[4758]: E0130 09:45:06.769670 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.037262 4758 scope.go:117] "RemoveContainer" 
containerID="5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.060184 4758 scope.go:117] "RemoveContainer" containerID="d8918a87876ba52cb2581f9ce297e46ecf37f213aa8947f1980878195117fc95" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.120222 4758 scope.go:117] "RemoveContainer" containerID="ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.159835 4758 scope.go:117] "RemoveContainer" containerID="04e032ddde7c1d4a35b13c1c04e206e7ca345c709ebf2a8892f46a64701c926c" Jan 30 09:45:18 crc kubenswrapper[4758]: I0130 09:45:18.768871 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:45:18 crc kubenswrapper[4758]: E0130 09:45:18.769628 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:45:30 crc kubenswrapper[4758]: I0130 09:45:30.769909 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:45:31 crc kubenswrapper[4758]: I0130 09:45:31.791618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"} Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.729357 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:11 crc 
kubenswrapper[4758]: E0130 09:46:11.730379 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerName="collect-profiles" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.730397 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerName="collect-profiles" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.730717 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerName="collect-profiles" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.732383 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.749018 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.807984 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.808302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.808331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.909579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.909652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.909755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.910392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.910392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.935942 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:12 crc kubenswrapper[4758]: I0130 09:46:12.051341 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:12 crc kubenswrapper[4758]: I0130 09:46:12.663912 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:13 crc kubenswrapper[4758]: I0130 09:46:13.134830 4758 generic.go:334] "Generic (PLEG): container finished" podID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" exitCode=0 Jan 30 09:46:13 crc kubenswrapper[4758]: I0130 09:46:13.134894 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc"} Jan 30 09:46:13 crc kubenswrapper[4758]: I0130 09:46:13.135198 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerStarted","Data":"b0a7e44dbba51b78ce0570a5f21cc47fc7c7673c55ce981aae9b66a918c2e48a"} Jan 30 09:46:14 crc kubenswrapper[4758]: I0130 09:46:14.146368 4758 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerStarted","Data":"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e"} Jan 30 09:46:16 crc kubenswrapper[4758]: I0130 09:46:16.166819 4758 generic.go:334] "Generic (PLEG): container finished" podID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" exitCode=0 Jan 30 09:46:16 crc kubenswrapper[4758]: I0130 09:46:16.166882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e"} Jan 30 09:46:17 crc kubenswrapper[4758]: I0130 09:46:17.177719 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerStarted","Data":"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12"} Jan 30 09:46:17 crc kubenswrapper[4758]: I0130 09:46:17.207602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmj29" podStartSLOduration=2.750898953 podStartE2EDuration="6.207582231s" podCreationTimestamp="2026-01-30 09:46:11 +0000 UTC" firstStartedPulling="2026-01-30 09:46:13.136943986 +0000 UTC m=+4578.109255537" lastFinishedPulling="2026-01-30 09:46:16.593627264 +0000 UTC m=+4581.565938815" observedRunningTime="2026-01-30 09:46:17.196373578 +0000 UTC m=+4582.168685159" watchObservedRunningTime="2026-01-30 09:46:17.207582231 +0000 UTC m=+4582.179893812" Jan 30 09:46:22 crc kubenswrapper[4758]: I0130 09:46:22.052234 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:22 crc kubenswrapper[4758]: I0130 09:46:22.052956 
4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:23 crc kubenswrapper[4758]: I0130 09:46:23.206416 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vmj29" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" probeResult="failure" output=< Jan 30 09:46:23 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:46:23 crc kubenswrapper[4758]: > Jan 30 09:46:32 crc kubenswrapper[4758]: I0130 09:46:32.174923 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:32 crc kubenswrapper[4758]: I0130 09:46:32.223278 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:33 crc kubenswrapper[4758]: I0130 09:46:33.093030 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:33 crc kubenswrapper[4758]: I0130 09:46:33.327324 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmj29" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" containerID="cri-o://099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" gracePeriod=2 Jan 30 09:46:33 crc kubenswrapper[4758]: I0130 09:46:33.857355 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.016377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.016511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.016582 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.017539 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities" (OuterVolumeSpecName: "utilities") pod "f58c80cf-e686-4eb0-bd9f-40aa842e2c34" (UID: "f58c80cf-e686-4eb0-bd9f-40aa842e2c34"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.020786 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.022593 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4" (OuterVolumeSpecName: "kube-api-access-qs5k4") pod "f58c80cf-e686-4eb0-bd9f-40aa842e2c34" (UID: "f58c80cf-e686-4eb0-bd9f-40aa842e2c34"). InnerVolumeSpecName "kube-api-access-qs5k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.097057 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f58c80cf-e686-4eb0-bd9f-40aa842e2c34" (UID: "f58c80cf-e686-4eb0-bd9f-40aa842e2c34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.122732 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") on node \"crc\" DevicePath \"\"" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.122765 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338104 4758 generic.go:334] "Generic (PLEG): container finished" podID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" exitCode=0 Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338178 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12"} Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338228 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"b0a7e44dbba51b78ce0570a5f21cc47fc7c7673c55ce981aae9b66a918c2e48a"} Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338228 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338265 4758 scope.go:117] "RemoveContainer" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.363435 4758 scope.go:117] "RemoveContainer" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.386287 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.389413 4758 scope.go:117] "RemoveContainer" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.406148 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429012 4758 scope.go:117] "RemoveContainer" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" Jan 30 09:46:34 crc kubenswrapper[4758]: E0130 09:46:34.429464 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12\": container with ID starting with 099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12 not found: ID does not exist" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429511 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12"} err="failed to get container status \"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12\": rpc error: code = NotFound desc = could not find 
container \"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12\": container with ID starting with 099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12 not found: ID does not exist" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429540 4758 scope.go:117] "RemoveContainer" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" Jan 30 09:46:34 crc kubenswrapper[4758]: E0130 09:46:34.429791 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e\": container with ID starting with 96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e not found: ID does not exist" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429810 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e"} err="failed to get container status \"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e\": rpc error: code = NotFound desc = could not find container \"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e\": container with ID starting with 96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e not found: ID does not exist" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429822 4758 scope.go:117] "RemoveContainer" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" Jan 30 09:46:34 crc kubenswrapper[4758]: E0130 09:46:34.429988 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc\": container with ID starting with 08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc not found: ID does 
not exist" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.430006 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc"} err="failed to get container status \"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc\": rpc error: code = NotFound desc = could not find container \"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc\": container with ID starting with 08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc not found: ID does not exist" Jan 30 09:46:35 crc kubenswrapper[4758]: I0130 09:46:35.779352 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" path="/var/lib/kubelet/pods/f58c80cf-e686-4eb0-bd9f-40aa842e2c34/volumes" Jan 30 09:47:52 crc kubenswrapper[4758]: I0130 09:47:52.387216 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:47:52 crc kubenswrapper[4758]: I0130 09:47:52.388950 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:48:22 crc kubenswrapper[4758]: I0130 09:48:22.387894 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 30 09:48:22 crc kubenswrapper[4758]: I0130 09:48:22.389942 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.387460 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.388152 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.388210 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.388984 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.389059 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204" gracePeriod=600 Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.485419 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204" exitCode=0 Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.485487 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"} Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.486054 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"} Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.486081 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.923961 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:22 crc kubenswrapper[4758]: E0130 09:50:22.925007 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925023 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" Jan 30 09:50:22 crc kubenswrapper[4758]: E0130 09:50:22.925057 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-utilities" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925067 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-utilities" Jan 30 09:50:22 crc kubenswrapper[4758]: E0130 09:50:22.925082 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-content" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925093 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-content" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925344 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.927127 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.951345 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.951745 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.951780 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.965776 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.053833 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054010 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054052 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054398 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054463 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.096549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.264306 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.779164 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.261170 4758 generic.go:334] "Generic (PLEG): container finished" podID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307" exitCode=0 Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.261488 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"} Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.261567 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerStarted","Data":"d3f3f343b9c85fddae57464a7d2d4977951dbba479525623971ffc2368fc7490"} Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.262834 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.271568 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerStarted","Data":"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"} Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.292616 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.294402 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.323405 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.396728 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.396792 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.397052 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.498473 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.498526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.498591 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.499148 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.499229 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.525084 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.614895 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.177645 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.280713 4758 generic.go:334] "Generic (PLEG): container finished" podID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093" exitCode=0 Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.280772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"} Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.283435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerStarted","Data":"c74716ef804e2d84332b2c217c4e78dacce0653d9f0c63fd9aeb6ad91f9e76a9"} Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.292889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerStarted","Data":"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"} Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.295678 4758 generic.go:334] "Generic (PLEG): container finished" podID="41b18707-50a1-4f70-876c-4116e4970b68" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" exitCode=0 Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.295709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" 
event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21"} Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.312413 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkwlg" podStartSLOduration=2.730337418 podStartE2EDuration="5.312394519s" podCreationTimestamp="2026-01-30 09:50:22 +0000 UTC" firstStartedPulling="2026-01-30 09:50:24.262602647 +0000 UTC m=+4829.234914198" lastFinishedPulling="2026-01-30 09:50:26.844659748 +0000 UTC m=+4831.816971299" observedRunningTime="2026-01-30 09:50:27.310458889 +0000 UTC m=+4832.282770440" watchObservedRunningTime="2026-01-30 09:50:27.312394519 +0000 UTC m=+4832.284706070" Jan 30 09:50:28 crc kubenswrapper[4758]: I0130 09:50:28.322371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerStarted","Data":"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e"} Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.265126 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.265714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.350115 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.558338 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.631832 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.376768 4758 generic.go:334] "Generic (PLEG): container finished" podID="41b18707-50a1-4f70-876c-4116e4970b68" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" exitCode=0 Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.376861 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e"} Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.377300 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkwlg" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" containerID="cri-o://51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" gracePeriod=2 Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.964424 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.006694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"bb4421b8-2d05-42aa-823d-95f4d4704c94\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.007067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"bb4421b8-2d05-42aa-823d-95f4d4704c94\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.007140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"bb4421b8-2d05-42aa-823d-95f4d4704c94\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.012730 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities" (OuterVolumeSpecName: "utilities") pod "bb4421b8-2d05-42aa-823d-95f4d4704c94" (UID: "bb4421b8-2d05-42aa-823d-95f4d4704c94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.029233 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb4421b8-2d05-42aa-823d-95f4d4704c94" (UID: "bb4421b8-2d05-42aa-823d-95f4d4704c94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.035264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr" (OuterVolumeSpecName: "kube-api-access-r8brr") pod "bb4421b8-2d05-42aa-823d-95f4d4704c94" (UID: "bb4421b8-2d05-42aa-823d-95f4d4704c94"). InnerVolumeSpecName "kube-api-access-r8brr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.108820 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") on node \"crc\" DevicePath \"\"" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.108851 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.108861 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388511 4758 generic.go:334] "Generic (PLEG): container finished" podID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" exitCode=0 Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388577 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"} Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388934 4758 scope.go:117] "RemoveContainer" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"d3f3f343b9c85fddae57464a7d2d4977951dbba479525623971ffc2368fc7490"} Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.393220 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerStarted","Data":"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574"} Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.414562 4758 scope.go:117] "RemoveContainer" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.420552 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tn9df" podStartSLOduration=2.871121947 podStartE2EDuration="11.420533221s" podCreationTimestamp="2026-01-30 09:50:25 +0000 UTC" firstStartedPulling="2026-01-30 09:50:27.298217753 +0000 UTC m=+4832.270529304" lastFinishedPulling="2026-01-30 09:50:35.847629027 +0000 UTC m=+4840.819940578" observedRunningTime="2026-01-30 09:50:36.418986832 +0000 UTC m=+4841.391298413" watchObservedRunningTime="2026-01-30 
09:50:36.420533221 +0000 UTC m=+4841.392844772" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.450517 4758 scope.go:117] "RemoveContainer" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.456528 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.467782 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.473376 4758 scope.go:117] "RemoveContainer" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" Jan 30 09:50:36 crc kubenswrapper[4758]: E0130 09:50:36.473726 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2\": container with ID starting with 51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2 not found: ID does not exist" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.473756 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"} err="failed to get container status \"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2\": rpc error: code = NotFound desc = could not find container \"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2\": container with ID starting with 51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2 not found: ID does not exist" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.473776 4758 scope.go:117] "RemoveContainer" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093" Jan 30 
09:50:36 crc kubenswrapper[4758]: E0130 09:50:36.474105 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093\": container with ID starting with 2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093 not found: ID does not exist" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.474137 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"} err="failed to get container status \"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093\": rpc error: code = NotFound desc = could not find container \"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093\": container with ID starting with 2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093 not found: ID does not exist" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.474183 4758 scope.go:117] "RemoveContainer" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307" Jan 30 09:50:36 crc kubenswrapper[4758]: E0130 09:50:36.474487 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307\": container with ID starting with 35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307 not found: ID does not exist" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307" Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.474516 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"} err="failed to get container status 
\"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307\": rpc error: code = NotFound desc = could not find container \"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307\": container with ID starting with 35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307 not found: ID does not exist" Jan 30 09:50:37 crc kubenswrapper[4758]: I0130 09:50:37.780435 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" path="/var/lib/kubelet/pods/bb4421b8-2d05-42aa-823d-95f4d4704c94/volumes" Jan 30 09:50:45 crc kubenswrapper[4758]: I0130 09:50:45.616660 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:45 crc kubenswrapper[4758]: I0130 09:50:45.617275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:46 crc kubenswrapper[4758]: I0130 09:50:46.658903 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" probeResult="failure" output=< Jan 30 09:50:46 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:50:46 crc kubenswrapper[4758]: > Jan 30 09:50:52 crc kubenswrapper[4758]: I0130 09:50:52.387818 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:50:52 crc kubenswrapper[4758]: I0130 09:50:52.388407 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:50:56 crc kubenswrapper[4758]: I0130 09:50:56.658012 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" probeResult="failure" output=< Jan 30 09:50:56 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:50:56 crc kubenswrapper[4758]: > Jan 30 09:51:06 crc kubenswrapper[4758]: I0130 09:51:06.663615 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" probeResult="failure" output=< Jan 30 09:51:06 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:51:06 crc kubenswrapper[4758]: > Jan 30 09:51:15 crc kubenswrapper[4758]: I0130 09:51:15.662761 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:51:15 crc kubenswrapper[4758]: I0130 09:51:15.712857 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:51:15 crc kubenswrapper[4758]: I0130 09:51:15.902768 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:51:16 crc kubenswrapper[4758]: I0130 09:51:16.858997 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" containerID="cri-o://80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" gracePeriod=2 Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.433964 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.618761 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"41b18707-50a1-4f70-876c-4116e4970b68\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.618903 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"41b18707-50a1-4f70-876c-4116e4970b68\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.618984 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"41b18707-50a1-4f70-876c-4116e4970b68\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.619971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities" (OuterVolumeSpecName: "utilities") pod "41b18707-50a1-4f70-876c-4116e4970b68" (UID: "41b18707-50a1-4f70-876c-4116e4970b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.624860 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt" (OuterVolumeSpecName: "kube-api-access-st2dt") pod "41b18707-50a1-4f70-876c-4116e4970b68" (UID: "41b18707-50a1-4f70-876c-4116e4970b68"). InnerVolumeSpecName "kube-api-access-st2dt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.720763 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.720794 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") on node \"crc\" DevicePath \"\"" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.750591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41b18707-50a1-4f70-876c-4116e4970b68" (UID: "41b18707-50a1-4f70-876c-4116e4970b68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.823313 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870663 4758 generic.go:334] "Generic (PLEG): container finished" podID="41b18707-50a1-4f70-876c-4116e4970b68" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" exitCode=0 Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870701 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574"} Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870727 4758 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"c74716ef804e2d84332b2c217c4e78dacce0653d9f0c63fd9aeb6ad91f9e76a9"} Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870744 4758 scope.go:117] "RemoveContainer" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870870 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.898479 4758 scope.go:117] "RemoveContainer" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.900385 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.919352 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.925545 4758 scope.go:117] "RemoveContainer" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.963535 4758 scope.go:117] "RemoveContainer" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" Jan 30 09:51:17 crc kubenswrapper[4758]: E0130 09:51:17.964210 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574\": container with ID starting with 80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574 not found: ID does not exist" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.964479 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574"} err="failed to get container status \"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574\": rpc error: code = NotFound desc = could not find container \"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574\": container with ID starting with 80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574 not found: ID does not exist" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.964555 4758 scope.go:117] "RemoveContainer" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" Jan 30 09:51:17 crc kubenswrapper[4758]: E0130 09:51:17.964856 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e\": container with ID starting with 9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e not found: ID does not exist" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.964944 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e"} err="failed to get container status \"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e\": rpc error: code = NotFound desc = could not find container \"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e\": container with ID starting with 9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e not found: ID does not exist" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.965032 4758 scope.go:117] "RemoveContainer" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" Jan 30 09:51:17 crc kubenswrapper[4758]: E0130 
09:51:17.965349 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21\": container with ID starting with 5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21 not found: ID does not exist" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.965481 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21"} err="failed to get container status \"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21\": rpc error: code = NotFound desc = could not find container \"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21\": container with ID starting with 5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21 not found: ID does not exist" Jan 30 09:51:19 crc kubenswrapper[4758]: I0130 09:51:19.778977 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b18707-50a1-4f70-876c-4116e4970b68" path="/var/lib/kubelet/pods/41b18707-50a1-4f70-876c-4116e4970b68/volumes" Jan 30 09:51:22 crc kubenswrapper[4758]: I0130 09:51:22.387473 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:51:22 crc kubenswrapper[4758]: I0130 09:51:22.387823 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.386941 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.387649 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.387709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.388749 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.388824 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" gracePeriod=600 Jan 30 09:51:52 crc kubenswrapper[4758]: E0130 09:51:52.523568 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.193209 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" exitCode=0 Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.193539 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"} Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.193650 4758 scope.go:117] "RemoveContainer" containerID="b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204" Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.194468 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:51:53 crc kubenswrapper[4758]: E0130 09:51:53.194823 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:05 crc kubenswrapper[4758]: I0130 09:52:05.781695 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:05 crc kubenswrapper[4758]: E0130 09:52:05.782588 4758 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.242475 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243270 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243293 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243320 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243338 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243347 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243367 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" Jan 30 09:52:06 crc 
kubenswrapper[4758]: I0130 09:52:06.243375 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243393 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243402 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243425 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243432 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243684 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243701 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.245374 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.254553 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.333085 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.333204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.333326 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.435288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.435343 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.435389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.436062 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.436187 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.453781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.576193 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:07 crc kubenswrapper[4758]: I0130 09:52:07.215435 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:07 crc kubenswrapper[4758]: I0130 09:52:07.318421 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerStarted","Data":"7ccd6f77a4ed3616550ba4614117ffe2ac0adf80c68b2c2a1e2779b5b9f9ec76"} Jan 30 09:52:08 crc kubenswrapper[4758]: I0130 09:52:08.329846 4758 generic.go:334] "Generic (PLEG): container finished" podID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" exitCode=0 Jan 30 09:52:08 crc kubenswrapper[4758]: I0130 09:52:08.330189 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323"} Jan 30 09:52:10 crc kubenswrapper[4758]: I0130 09:52:10.347759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerStarted","Data":"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d"} Jan 30 09:52:11 crc kubenswrapper[4758]: I0130 09:52:11.359220 4758 generic.go:334] "Generic (PLEG): container finished" podID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" exitCode=0 Jan 30 09:52:11 crc kubenswrapper[4758]: I0130 09:52:11.359292 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" 
event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d"} Jan 30 09:52:12 crc kubenswrapper[4758]: I0130 09:52:12.371074 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerStarted","Data":"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9"} Jan 30 09:52:12 crc kubenswrapper[4758]: I0130 09:52:12.394619 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vz4jt" podStartSLOduration=2.884627468 podStartE2EDuration="6.394596664s" podCreationTimestamp="2026-01-30 09:52:06 +0000 UTC" firstStartedPulling="2026-01-30 09:52:08.33224567 +0000 UTC m=+4933.304557221" lastFinishedPulling="2026-01-30 09:52:11.842214876 +0000 UTC m=+4936.814526417" observedRunningTime="2026-01-30 09:52:12.393355306 +0000 UTC m=+4937.365666857" watchObservedRunningTime="2026-01-30 09:52:12.394596664 +0000 UTC m=+4937.366908215" Jan 30 09:52:16 crc kubenswrapper[4758]: I0130 09:52:16.576366 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:16 crc kubenswrapper[4758]: I0130 09:52:16.576900 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:16 crc kubenswrapper[4758]: I0130 09:52:16.623237 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:17 crc kubenswrapper[4758]: I0130 09:52:17.836744 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:17 crc kubenswrapper[4758]: I0130 09:52:17.888183 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:19 crc kubenswrapper[4758]: I0130 09:52:19.425223 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vz4jt" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" containerID="cri-o://431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" gracePeriod=2 Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.008191 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.123004 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.123237 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.123440 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.124789 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities" (OuterVolumeSpecName: "utilities") pod "23b0349e-90b6-461d-9d07-2e0e5264cfc3" (UID: 
"23b0349e-90b6-461d-9d07-2e0e5264cfc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.129997 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq" (OuterVolumeSpecName: "kube-api-access-8gzsq") pod "23b0349e-90b6-461d-9d07-2e0e5264cfc3" (UID: "23b0349e-90b6-461d-9d07-2e0e5264cfc3"). InnerVolumeSpecName "kube-api-access-8gzsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.172683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23b0349e-90b6-461d-9d07-2e0e5264cfc3" (UID: "23b0349e-90b6-461d-9d07-2e0e5264cfc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.226091 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") on node \"crc\" DevicePath \"\"" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.226123 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.226136 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436696 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" exitCode=0 Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9"} Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436764 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"7ccd6f77a4ed3616550ba4614117ffe2ac0adf80c68b2c2a1e2779b5b9f9ec76"} Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436783 4758 scope.go:117] "RemoveContainer" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.437168 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.468068 4758 scope.go:117] "RemoveContainer" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.480424 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.489371 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.521240 4758 scope.go:117] "RemoveContainer" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.555831 4758 scope.go:117] "RemoveContainer" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.556434 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9\": container with ID starting with 431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9 not found: ID does not exist" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.556487 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9"} err="failed to get container status \"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9\": rpc error: code = NotFound desc = could not find container \"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9\": container with ID starting with 431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9 not 
found: ID does not exist" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.556524 4758 scope.go:117] "RemoveContainer" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.556998 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d\": container with ID starting with 3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d not found: ID does not exist" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.557058 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d"} err="failed to get container status \"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d\": rpc error: code = NotFound desc = could not find container \"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d\": container with ID starting with 3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d not found: ID does not exist" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.557086 4758 scope.go:117] "RemoveContainer" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.558606 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323\": container with ID starting with 9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323 not found: ID does not exist" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.558651 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323"} err="failed to get container status \"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323\": rpc error: code = NotFound desc = could not find container \"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323\": container with ID starting with 9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323 not found: ID does not exist" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.769624 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.770317 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:21 crc kubenswrapper[4758]: I0130 09:52:21.779723 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" path="/var/lib/kubelet/pods/23b0349e-90b6-461d-9d07-2e0e5264cfc3/volumes" Jan 30 09:52:33 crc kubenswrapper[4758]: I0130 09:52:33.769439 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:33 crc kubenswrapper[4758]: E0130 09:52:33.770748 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:44 crc kubenswrapper[4758]: I0130 09:52:44.768439 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:44 crc kubenswrapper[4758]: E0130 09:52:44.769197 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:59 crc kubenswrapper[4758]: I0130 09:52:59.768713 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:59 crc kubenswrapper[4758]: E0130 09:52:59.769701 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:12 crc kubenswrapper[4758]: I0130 09:53:12.768957 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:12 crc kubenswrapper[4758]: E0130 09:53:12.771534 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:24 crc kubenswrapper[4758]: I0130 09:53:24.769159 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:24 crc kubenswrapper[4758]: E0130 09:53:24.769940 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:39 crc kubenswrapper[4758]: I0130 09:53:39.768955 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:39 crc kubenswrapper[4758]: E0130 09:53:39.770148 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:50 crc kubenswrapper[4758]: I0130 09:53:50.768264 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:50 crc kubenswrapper[4758]: E0130 09:53:50.769817 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:05 crc kubenswrapper[4758]: I0130 09:54:05.775510 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:05 crc kubenswrapper[4758]: E0130 09:54:05.776314 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:17 crc kubenswrapper[4758]: I0130 09:54:17.770147 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:17 crc kubenswrapper[4758]: E0130 09:54:17.771115 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:28 crc kubenswrapper[4758]: I0130 09:54:28.769013 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:28 crc kubenswrapper[4758]: E0130 09:54:28.769638 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:39 crc kubenswrapper[4758]: I0130 09:54:39.769054 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:39 crc kubenswrapper[4758]: E0130 09:54:39.769805 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:53 crc kubenswrapper[4758]: I0130 09:54:53.769241 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:53 crc kubenswrapper[4758]: E0130 09:54:53.770186 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:04 crc kubenswrapper[4758]: I0130 09:55:04.768449 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:04 crc kubenswrapper[4758]: E0130 09:55:04.769282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:15 crc kubenswrapper[4758]: I0130 09:55:15.776616 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:15 crc kubenswrapper[4758]: E0130 09:55:15.777355 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:30 crc kubenswrapper[4758]: I0130 09:55:30.769054 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:30 crc kubenswrapper[4758]: E0130 09:55:30.769732 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:43 crc kubenswrapper[4758]: I0130 09:55:43.769934 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:43 crc kubenswrapper[4758]: E0130 09:55:43.770658 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:54 crc kubenswrapper[4758]: I0130 09:55:54.769361 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:54 crc kubenswrapper[4758]: E0130 09:55:54.770481 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:06 crc kubenswrapper[4758]: I0130 09:56:06.770155 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:06 crc kubenswrapper[4758]: E0130 09:56:06.771121 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.571656 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:11 crc kubenswrapper[4758]: E0130 09:56:11.572275 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-utilities" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572288 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-utilities" Jan 30 09:56:11 crc kubenswrapper[4758]: E0130 09:56:11.572298 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572304 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" Jan 30 09:56:11 crc kubenswrapper[4758]: E0130 09:56:11.572319 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-content" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572338 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-content" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572527 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.576798 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.586700 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.643397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.643495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.643527 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745154 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745255 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745784 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.766233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.913208 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:12 crc kubenswrapper[4758]: I0130 09:56:12.568850 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.475031 4758 generic.go:334] "Generic (PLEG): container finished" podID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" exitCode=0 Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.475140 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486"} Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.475348 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerStarted","Data":"31dbd8a0d2181073c437e0a25b9faf07401b3676b670f95d7a1d2cc581545505"} Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.477902 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:56:14 crc kubenswrapper[4758]: I0130 09:56:14.484303 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerStarted","Data":"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade"} Jan 30 09:56:16 crc kubenswrapper[4758]: I0130 09:56:16.506386 4758 generic.go:334] "Generic (PLEG): container finished" podID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" exitCode=0 Jan 30 09:56:16 crc kubenswrapper[4758]: I0130 09:56:16.506529 4758 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade"} Jan 30 09:56:17 crc kubenswrapper[4758]: I0130 09:56:17.517528 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerStarted","Data":"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8"} Jan 30 09:56:17 crc kubenswrapper[4758]: I0130 09:56:17.539644 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsqjp" podStartSLOduration=3.140921249 podStartE2EDuration="6.539621775s" podCreationTimestamp="2026-01-30 09:56:11 +0000 UTC" firstStartedPulling="2026-01-30 09:56:13.477263054 +0000 UTC m=+5178.449574595" lastFinishedPulling="2026-01-30 09:56:16.87596358 +0000 UTC m=+5181.848275121" observedRunningTime="2026-01-30 09:56:17.53248699 +0000 UTC m=+5182.504798571" watchObservedRunningTime="2026-01-30 09:56:17.539621775 +0000 UTC m=+5182.511933346" Jan 30 09:56:17 crc kubenswrapper[4758]: I0130 09:56:17.769130 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:17 crc kubenswrapper[4758]: E0130 09:56:17.769463 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:21 crc kubenswrapper[4758]: I0130 09:56:21.914225 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:21 crc kubenswrapper[4758]: I0130 09:56:21.914828 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:21 crc kubenswrapper[4758]: I0130 09:56:21.976515 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:22 crc kubenswrapper[4758]: I0130 09:56:22.610500 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:22 crc kubenswrapper[4758]: I0130 09:56:22.715624 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:24 crc kubenswrapper[4758]: I0130 09:56:24.582985 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nsqjp" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" containerID="cri-o://50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" gracePeriod=2 Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.155654 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.191208 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"eae43c60-4f82-4c9f-86cb-c453f5633105\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.191279 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"eae43c60-4f82-4c9f-86cb-c453f5633105\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.191357 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"eae43c60-4f82-4c9f-86cb-c453f5633105\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.199014 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities" (OuterVolumeSpecName: "utilities") pod "eae43c60-4f82-4c9f-86cb-c453f5633105" (UID: "eae43c60-4f82-4c9f-86cb-c453f5633105"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.217726 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4" (OuterVolumeSpecName: "kube-api-access-gtqx4") pod "eae43c60-4f82-4c9f-86cb-c453f5633105" (UID: "eae43c60-4f82-4c9f-86cb-c453f5633105"). InnerVolumeSpecName "kube-api-access-gtqx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.246945 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eae43c60-4f82-4c9f-86cb-c453f5633105" (UID: "eae43c60-4f82-4c9f-86cb-c453f5633105"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.293578 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.293630 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.293646 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598264 4758 generic.go:334] "Generic (PLEG): container finished" podID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" exitCode=0 Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598314 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8"} Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598346 4758 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"31dbd8a0d2181073c437e0a25b9faf07401b3676b670f95d7a1d2cc581545505"} Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598360 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598366 4758 scope.go:117] "RemoveContainer" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.641077 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.656798 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.659238 4758 scope.go:117] "RemoveContainer" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.692860 4758 scope.go:117] "RemoveContainer" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.745397 4758 scope.go:117] "RemoveContainer" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" Jan 30 09:56:25 crc kubenswrapper[4758]: E0130 09:56:25.745896 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8\": container with ID starting with 50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8 not found: ID does not exist" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 
09:56:25.745940 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8"} err="failed to get container status \"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8\": rpc error: code = NotFound desc = could not find container \"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8\": container with ID starting with 50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8 not found: ID does not exist" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.745970 4758 scope.go:117] "RemoveContainer" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" Jan 30 09:56:25 crc kubenswrapper[4758]: E0130 09:56:25.746645 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade\": container with ID starting with 9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade not found: ID does not exist" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.746674 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade"} err="failed to get container status \"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade\": rpc error: code = NotFound desc = could not find container \"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade\": container with ID starting with 9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade not found: ID does not exist" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.746694 4758 scope.go:117] "RemoveContainer" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" Jan 30 09:56:25 crc 
kubenswrapper[4758]: E0130 09:56:25.746950 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486\": container with ID starting with 96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486 not found: ID does not exist" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.746978 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486"} err="failed to get container status \"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486\": rpc error: code = NotFound desc = could not find container \"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486\": container with ID starting with 96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486 not found: ID does not exist" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.778327 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" path="/var/lib/kubelet/pods/eae43c60-4f82-4c9f-86cb-c453f5633105/volumes" Jan 30 09:56:30 crc kubenswrapper[4758]: I0130 09:56:30.770099 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:30 crc kubenswrapper[4758]: E0130 09:56:30.770715 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:44 crc 
kubenswrapper[4758]: I0130 09:56:44.769079 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:44 crc kubenswrapper[4758]: E0130 09:56:44.769808 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:45 crc kubenswrapper[4758]: I0130 09:56:45.767827 4758 generic.go:334] "Generic (PLEG): container finished" podID="110e1168-332c-4165-bd6e-47419c571681" containerID="9585d72b864ad0afa161d8615e2e581e1849cfc14f137d53455eed18ce1d77db" exitCode=1 Jan 30 09:56:45 crc kubenswrapper[4758]: I0130 09:56:45.777851 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerDied","Data":"9585d72b864ad0afa161d8615e2e581e1849cfc14f137d53455eed18ce1d77db"} Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.188521 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261395 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261520 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261614 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261671 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261713 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261761 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.262074 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.262289 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.262757 4758 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.268211 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data" (OuterVolumeSpecName: "config-data") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.268370 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.271316 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.271325 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh" (OuterVolumeSpecName: "kube-api-access-kdxsh") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "kube-api-access-kdxsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.297168 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.303485 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.303550 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.320865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.365478 4758 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.365813 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.365925 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.366016 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.366114 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.366184 4758 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.367982 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.368090 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.389611 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.473332 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.785474 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerDied","Data":"333f254addae44025e00aa879c29d6e6ca58b2430bd9e1d7e1eba46b32166b13"} Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.785522 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333f254addae44025e00aa879c29d6e6ca58b2430bd9e1d7e1eba46b32166b13" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.785526 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.768824 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.803173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.804835 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110e1168-332c-4165-bd6e-47419c571681" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.804856 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="110e1168-332c-4165-bd6e-47419c571681" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.804881 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.804888 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.804914 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-content" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.804921 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-content" Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.805184 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-utilities" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.805199 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-utilities" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.805378 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="110e1168-332c-4165-bd6e-47419c571681" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.805413 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.806780 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.813512 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bnbxz" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.826153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.890836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.891205 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwm6w\" (UniqueName: \"kubernetes.io/projected/842839ab-b48f-429c-8823-152c1606dda7-kube-api-access-lwm6w\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc 
kubenswrapper[4758]: I0130 09:56:58.992602 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwm6w\" (UniqueName: \"kubernetes.io/projected/842839ab-b48f-429c-8823-152c1606dda7-kube-api-access-lwm6w\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.992688 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.993780 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.014656 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwm6w\" (UniqueName: \"kubernetes.io/projected/842839ab-b48f-429c-8823-152c1606dda7-kube-api-access-lwm6w\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.017927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.208558 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.651030 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.924770 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8"} Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.937072 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"842839ab-b48f-429c-8823-152c1606dda7","Type":"ContainerStarted","Data":"5d6cef65fe9cf1d80cd89f46f02a9e78be13b5a40cd2cb971e6d71e8015cda92"} Jan 30 09:57:00 crc kubenswrapper[4758]: I0130 09:57:00.947167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"842839ab-b48f-429c-8823-152c1606dda7","Type":"ContainerStarted","Data":"5ed48f7837850948a1adefcf515a838afafc43d0169672c9d21985c6383146f5"} Jan 30 09:57:00 crc kubenswrapper[4758]: I0130 09:57:00.968477 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.963328133 podStartE2EDuration="2.968460069s" podCreationTimestamp="2026-01-30 09:56:58 +0000 UTC" firstStartedPulling="2026-01-30 09:56:59.659662322 +0000 UTC m=+5224.631973873" 
lastFinishedPulling="2026-01-30 09:57:00.664794258 +0000 UTC m=+5225.637105809" observedRunningTime="2026-01-30 09:57:00.961361457 +0000 UTC m=+5225.933673008" watchObservedRunningTime="2026-01-30 09:57:00.968460069 +0000 UTC m=+5225.940771620" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.178825 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"] Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.182681 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.185182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lpqrm"/"default-dockercfg-5nkfs" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.185199 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lpqrm"/"openshift-service-ca.crt" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.193489 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lpqrm"/"kube-root-ca.crt" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.204025 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"] Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.275450 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.275759 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7d4q\" (UniqueName: 
\"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.377345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.377407 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.377838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.396953 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.500453 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.983218 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"] Jan 30 09:57:30 crc kubenswrapper[4758]: I0130 09:57:30.186797 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerStarted","Data":"89ae98253414ceb39dcf218749a0f366f9233ed8dc3795138c34830143d35d76"} Jan 30 09:57:38 crc kubenswrapper[4758]: I0130 09:57:38.292758 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerStarted","Data":"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"} Jan 30 09:57:38 crc kubenswrapper[4758]: I0130 09:57:38.293590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerStarted","Data":"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"} Jan 30 09:57:38 crc kubenswrapper[4758]: I0130 09:57:38.313510 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpqrm/must-gather-qpckg" podStartSLOduration=1.5552651979999998 podStartE2EDuration="9.31348545s" podCreationTimestamp="2026-01-30 09:57:29 +0000 UTC" firstStartedPulling="2026-01-30 09:57:29.996479491 +0000 UTC m=+5254.968791042" lastFinishedPulling="2026-01-30 09:57:37.754699743 +0000 UTC m=+5262.727011294" observedRunningTime="2026-01-30 09:57:38.309329659 +0000 UTC m=+5263.281641240" watchObservedRunningTime="2026-01-30 09:57:38.31348545 +0000 UTC m=+5263.285797021" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.635923 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-lpqrm/crc-debug-8g2cp"] Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.638578 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.663323 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.663597 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.765458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.765926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.766142 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.826085 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.960687 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:57:44 crc kubenswrapper[4758]: I0130 09:57:44.352147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" event={"ID":"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e","Type":"ContainerStarted","Data":"a0af2912b7e8110758c70c798dcab6453fd6a32667c5d9a104395c99842f600d"} Jan 30 09:57:55 crc kubenswrapper[4758]: I0130 09:57:55.446541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" event={"ID":"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e","Type":"ContainerStarted","Data":"5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d"} Jan 30 09:57:55 crc kubenswrapper[4758]: I0130 09:57:55.475562 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" podStartSLOduration=2.102101914 podStartE2EDuration="12.475542999s" podCreationTimestamp="2026-01-30 09:57:43 +0000 UTC" firstStartedPulling="2026-01-30 09:57:44.011917713 +0000 UTC m=+5268.984229264" lastFinishedPulling="2026-01-30 09:57:54.385358798 +0000 UTC m=+5279.357670349" observedRunningTime="2026-01-30 09:57:55.472610098 +0000 UTC m=+5280.444921649" watchObservedRunningTime="2026-01-30 
09:57:55.475542999 +0000 UTC m=+5280.447854550" Jan 30 09:58:50 crc kubenswrapper[4758]: I0130 09:58:50.910659 4758 generic.go:334] "Generic (PLEG): container finished" podID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerID="5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d" exitCode=0 Jan 30 09:58:50 crc kubenswrapper[4758]: I0130 09:58:50.910748 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" event={"ID":"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e","Type":"ContainerDied","Data":"5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d"} Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.011573 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.050857 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-8g2cp"] Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.059528 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-8g2cp"] Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.136548 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.136624 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.137006 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host" (OuterVolumeSpecName: "host") pod "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" (UID: "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.137369 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.152669 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv" (OuterVolumeSpecName: "kube-api-access-dz7dv") pod "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" (UID: "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e"). InnerVolumeSpecName "kube-api-access-dz7dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.239086 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.941153 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0af2912b7e8110758c70c798dcab6453fd6a32667c5d9a104395c99842f600d" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.941472 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.296698 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-jdz8q"] Jan 30 09:58:53 crc kubenswrapper[4758]: E0130 09:58:53.297082 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerName="container-00" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.297096 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerName="container-00" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.297305 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerName="container-00" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.297924 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.479598 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.479708 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.581589 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxk2f\" (UniqueName: 
\"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.581687 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.581925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.619600 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.625324 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.781117 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" path="/var/lib/kubelet/pods/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e/volumes" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.952165 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" event={"ID":"a579bed6-89ad-42fe-9633-2f0c7f34d9de","Type":"ContainerStarted","Data":"d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3"} Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.952222 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" event={"ID":"a579bed6-89ad-42fe-9633-2f0c7f34d9de","Type":"ContainerStarted","Data":"e056a91092edf01dbe7658a48842b4f58bccb927ace5f17a092ad55a2ff57214"} Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.979330 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" podStartSLOduration=0.979311103 podStartE2EDuration="979.311103ms" podCreationTimestamp="2026-01-30 09:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:58:53.969375911 +0000 UTC m=+5338.941687482" watchObservedRunningTime="2026-01-30 09:58:53.979311103 +0000 UTC m=+5338.951622654" Jan 30 09:58:54 crc kubenswrapper[4758]: I0130 09:58:54.960762 4758 generic.go:334] "Generic (PLEG): container finished" podID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerID="d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3" exitCode=0 Jan 30 09:58:54 crc kubenswrapper[4758]: I0130 09:58:54.960798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" 
event={"ID":"a579bed6-89ad-42fe-9633-2f0c7f34d9de","Type":"ContainerDied","Data":"d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3"} Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.072050 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.102815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-jdz8q"] Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.111673 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-jdz8q"] Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.237340 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.237486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.237955 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host" (OuterVolumeSpecName: "host") pod "a579bed6-89ad-42fe-9633-2f0c7f34d9de" (UID: "a579bed6-89ad-42fe-9633-2f0c7f34d9de"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.243517 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f" (OuterVolumeSpecName: "kube-api-access-cxk2f") pod "a579bed6-89ad-42fe-9633-2f0c7f34d9de" (UID: "a579bed6-89ad-42fe-9633-2f0c7f34d9de"). InnerVolumeSpecName "kube-api-access-cxk2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.339827 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.340135 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.981114 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e056a91092edf01dbe7658a48842b4f58bccb927ace5f17a092ad55a2ff57214" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.981184 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.319329 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-kp2t6"] Jan 30 09:58:57 crc kubenswrapper[4758]: E0130 09:58:57.319782 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerName="container-00" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.319799 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerName="container-00" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.320026 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerName="container-00" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.320791 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.467303 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.467696 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.569542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.570133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.570274 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.599221 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.636662 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:57 crc kubenswrapper[4758]: W0130 09:58:57.673226 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9e7233_29a7_4dbe_a96c_d3bf9abd7077.slice/crio-31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326 WatchSource:0}: Error finding container 31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326: Status 404 returned error can't find the container with id 31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326 Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.783604 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" path="/var/lib/kubelet/pods/a579bed6-89ad-42fe-9633-2f0c7f34d9de/volumes" Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.991403 4758 generic.go:334] "Generic (PLEG): container finished" podID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerID="cb2be7757629008fc0336fea0781bdd9ab81c5796a2894a5f33c24f797829704" exitCode=0 Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.991746 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" event={"ID":"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077","Type":"ContainerDied","Data":"cb2be7757629008fc0336fea0781bdd9ab81c5796a2894a5f33c24f797829704"} Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.991777 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" event={"ID":"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077","Type":"ContainerStarted","Data":"31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326"} Jan 30 09:58:58 crc kubenswrapper[4758]: I0130 09:58:58.252196 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-kp2t6"] Jan 30 09:58:58 crc kubenswrapper[4758]: I0130 09:58:58.262736 4758 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-kp2t6"] Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.108977 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.300528 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.300939 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.300712 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host" (OuterVolumeSpecName: "host") pod "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" (UID: "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.301833 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.308552 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf" (OuterVolumeSpecName: "kube-api-access-llcpf") pod "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" (UID: "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077"). InnerVolumeSpecName "kube-api-access-llcpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.403513 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.777248 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" path="/var/lib/kubelet/pods/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077/volumes" Jan 30 09:59:00 crc kubenswrapper[4758]: I0130 09:59:00.010103 4758 scope.go:117] "RemoveContainer" containerID="cb2be7757629008fc0336fea0781bdd9ab81c5796a2894a5f33c24f797829704" Jan 30 09:59:00 crc kubenswrapper[4758]: I0130 09:59:00.010265 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" Jan 30 09:59:22 crc kubenswrapper[4758]: I0130 09:59:22.387561 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:59:22 crc kubenswrapper[4758]: I0130 09:59:22.388143 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.423519 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-856f46cdd-mkt57_e33c3e33-3106-483e-bdba-400a2911ff27/barbican-api/0.log" Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.625490 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-856f46cdd-mkt57_e33c3e33-3106-483e-bdba-400a2911ff27/barbican-api-log/0.log" Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.706674 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b64f54b54-68xdf_dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f/barbican-keystone-listener/0.log" Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.910931 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b64f54b54-68xdf_dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f/barbican-keystone-listener-log/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.014184 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-9c5c6c655-zfgmj_db351db3-71c9-4b03-98b9-68da68f45f14/barbican-worker/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.080785 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9c5c6c655-zfgmj_db351db3-71c9-4b03-98b9-68da68f45f14/barbican-worker-log/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.280354 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp_64e0d966-7ff9-4dd8-97c0-660cde10793b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.518958 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/ceilometer-notification-agent/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.538578 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/ceilometer-central-agent/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.645890 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/proxy-httpd/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.702341 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/sg-core/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.885339 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d6d3f5e9-e330-476b-be63-775114f987e6/cinder-api/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.949335 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d6d3f5e9-e330-476b-be63-775114f987e6/cinder-api-log/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.419109 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_fbd50144-fe99-468f-a32b-172996d95ca1/cinder-scheduler/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.576773 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbd50144-fe99-468f-a32b-172996d95ca1/probe/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.620667 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn_a282c0aa-8c3d-4a78-9fd6-1971701a1158/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.857289 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-95b4x_b5727310-9e0b-40f5-ae4e-209ed7d3ee36/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.932727 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-549cc57c95-cpkk5_cdf54085-1dd0-4eb1-9640-e75c69be5a44/init/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.184122 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-549cc57c95-cpkk5_cdf54085-1dd0-4eb1-9640-e75c69be5a44/init/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.323006 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w_103d82ed-724d-4545-9b1c-04633d68c1ef/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.371934 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-549cc57c95-cpkk5_cdf54085-1dd0-4eb1-9640-e75c69be5a44/dnsmasq-dns/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.573628 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_9e6a95ad-6f31-4494-9caf-5eea1c43e005/glance-log/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.584587 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e6a95ad-6f31-4494-9caf-5eea1c43e005/glance-httpd/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.810131 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa932ed-7bd7-4827-a24f-e29c15c9b563/glance-log/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.917406 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa932ed-7bd7-4827-a24f-e29c15c9b563/glance-httpd/0.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.078360 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cf698bb7b-gp87v_97906db2-3b2d-44ec-af77-d3edf75b7f76/horizon/2.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.264719 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cf698bb7b-gp87v_97906db2-3b2d-44ec-af77-d3edf75b7f76/horizon/1.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.507687 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4rccq_836593cc-2b98-4f54-8407-6d92687559f5/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.673864 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cf698bb7b-gp87v_97906db2-3b2d-44ec-af77-d3edf75b7f76/horizon-log/0.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.707675 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rtws8_39adf6a6-10cd-412d-aca4-3c68ddcf8887/install-os-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.101805 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496061-prvzt_c5ff7966-87c8-4b9b-8520-a05c3b5d252d/keystone-cron/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.327351 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_402b7d3e-d66f-412a-a3a8-4c45a9a47628/kube-state-metrics/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.484867 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-546cd7df57-wnwgz_283998cc-90b9-49fb-91f5-7cfd514603d0/keystone-api/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.826808 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv_35efb0cc-1bf4-4052-af18-b206ea052f80/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:32 crc kubenswrapper[4758]: I0130 09:59:32.530723 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr_e377cd96-b016-4154-bae7-fa61f9be7472/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:32 crc kubenswrapper[4758]: I0130 09:59:32.691545 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f9c8c6ff5-f2sb7_0bacb926-f58c-4c06-870a-633b7a3795c5/neutron-httpd/0.log" Jan 30 09:59:32 crc kubenswrapper[4758]: I0130 09:59:32.973106 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f9c8c6ff5-f2sb7_0bacb926-f58c-4c06-870a-633b7a3795c5/neutron-api/0.log" Jan 30 09:59:33 crc kubenswrapper[4758]: I0130 09:59:33.637967 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d5c8d4f1-2007-458e-a918-35eea3933622/nova-cell0-conductor-conductor/0.log" Jan 30 09:59:33 crc kubenswrapper[4758]: I0130 09:59:33.830221 4758 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9de33ead-37a1-4675-bad7-79b672a0c954/nova-cell1-conductor-conductor/0.log" Jan 30 09:59:34 crc kubenswrapper[4758]: I0130 09:59:34.306612 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6c7b05a5-3faf-4e02-9bb5-f79a4745f073/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 09:59:34 crc kubenswrapper[4758]: I0130 09:59:34.732649 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vfn7z_48c9d5d6-6dc5-4848-bade-3c302106b074/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:35 crc kubenswrapper[4758]: I0130 09:59:35.233298 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd2d2fe7-5dac-4f3b-80f6-650712925495/nova-api-log/0.log" Jan 30 09:59:35 crc kubenswrapper[4758]: I0130 09:59:35.378632 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e4213c10-dde9-4a4d-9af9-304dd08f755c/nova-metadata-log/0.log" Jan 30 09:59:35 crc kubenswrapper[4758]: I0130 09:59:35.727504 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd2d2fe7-5dac-4f3b-80f6-650712925495/nova-api-api/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.015932 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a15517a-ff48-40d1-91b4-442bfef91fc1/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.129481 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4697d282-598e-4faa-ae13-6ba6d3747bf0/nova-scheduler-scheduler/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.265291 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a15517a-ff48-40d1-91b4-442bfef91fc1/galera/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.350979 
4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a15517a-ff48-40d1-91b4-442bfef91fc1/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.698908 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1787d8b1-5b19-41e5-a66d-8375f9d5bb3f/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.841212 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1787d8b1-5b19-41e5-a66d-8375f9d5bb3f/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.947868 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1787d8b1-5b19-41e5-a66d-8375f9d5bb3f/galera/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.145365 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e81b8de8-1714-4a5d-852a-e61d4bc9cd5d/openstackclient/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.344514 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bg2b8_78294966-2fbd-4ed5-8d2a-2096ac07dac1/ovn-controller/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.496735 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-w9zqv_1c89386d-e6bb-45b3-bd95-970270275127/openstack-network-exporter/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.563771 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e4213c10-dde9-4a4d-9af9-304dd08f755c/nova-metadata-metadata/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.812888 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovsdb-server-init/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.965287 4758 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovsdb-server-init/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.970613 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovs-vswitchd/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.092477 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovsdb-server/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.273220 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fv46l_9db7f310-f803-4981-8a55-5d45e9015488/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.344364 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_831717ab-1273-408c-9fdf-4cd5bd2d2bb9/openstack-network-exporter/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.466912 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_831717ab-1273-408c-9fdf-4cd5bd2d2bb9/ovn-northd/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.615912 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a7ba2509-bffc-4639-9b6f-188e2a194b7a/openstack-network-exporter/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.672739 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a7ba2509-bffc-4639-9b6f-188e2a194b7a/ovsdbserver-nb/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.244702 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0817ec2c-1c6d-4c1b-a019-3f2579ade18a/openstack-network-exporter/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.372231 4758 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0817ec2c-1c6d-4c1b-a019-3f2579ade18a/ovsdbserver-sb/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.525077 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d/memcached/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.763135 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75649bd464-bvxps_ca89048c-91af-4732-8ef8-24da4618ccf9/placement-api/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.769314 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec0e1aed-0ac5-4482-906f-89c9243729ea/setup-container/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.839307 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75649bd464-bvxps_ca89048c-91af-4732-8ef8-24da4618ccf9/placement-log/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.001877 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec0e1aed-0ac5-4482-906f-89c9243729ea/setup-container/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.048293 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c72311ae-5d7e-4978-a690-a9bee0b3672b/setup-container/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.101666 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec0e1aed-0ac5-4482-906f-89c9243729ea/rabbitmq/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.303626 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn_49ef08a8-e5d8-4a62-bc4d-227948c7fa12/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 
09:59:40.371598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c72311ae-5d7e-4978-a690-a9bee0b3672b/setup-container/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.396310 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c72311ae-5d7e-4978-a690-a9bee0b3672b/rabbitmq/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.801447 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rr84b_a7d607da-5923-4ef5-82ba-083df5db3864/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.802374 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c_350797d0-7758-4c0d-84cb-ec160451f377/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.805867 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5nm8t_2b0522c4-4143-4ac3-b5c7-ea6c073dfc38/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.132738 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wlb6p_1135df14-5d2f-47ff-9038-fce2addc71d0/ssh-known-hosts-edpm-deployment/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.256234 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75f5775999-fhl5h_c2358e5c-db98-4b7b-8b6c-2e83132655a9/proxy-server/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.347585 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75f5775999-fhl5h_c2358e5c-db98-4b7b-8b6c-2e83132655a9/proxy-httpd/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.460234 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-auditor/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.476330 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ws6z7_0a36303a-ea35-4a12-be39-906481ea247a/swift-ring-rebalance/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.558391 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-reaper/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.649348 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-replicator/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.731547 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-auditor/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.748731 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-server/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.795421 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-replicator/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.857477 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-server/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.934195 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-updater/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.971558 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-expirer/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.040496 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-auditor/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.080751 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-replicator/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.172525 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-updater/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.181437 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-server/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.289117 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/swift-recon-cron/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.301405 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/rsync/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.541202 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn_38932896-a566-4440-b672-33909cb638b0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.611635 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_110e1168-332c-4165-bd6e-47419c571681/tempest-tests-tempest-tests-runner/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.740874 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_842839ab-b48f-429c-8823-152c1606dda7/test-operator-logs-container/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.847427 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-55r6r_21db3b55-b11b-4ca5-a2d0-676ec4e6fb83/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:52 crc kubenswrapper[4758]: I0130 09:59:52.387472 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:59:52 crc kubenswrapper[4758]: I0130 09:59:52.387904 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.145411 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc"] Jan 30 10:00:00 crc kubenswrapper[4758]: E0130 10:00:00.146312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerName="container-00" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.146324 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerName="container-00" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.146493 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerName="container-00" Jan 30 10:00:00 crc 
kubenswrapper[4758]: I0130 10:00:00.147115 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.150471 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.150765 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.169051 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc"] Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.179374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.179448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.179567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"collect-profiles-29496120-czxhc\" 
(UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.280915 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.280995 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.281054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.282271 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.290802 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.299408 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.464549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.958455 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc"] Jan 30 10:00:01 crc kubenswrapper[4758]: I0130 10:00:01.590472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerStarted","Data":"da2f39fd9f27fcc1481cbfd2a939f7a131858726c2f9923b0278f5da04987da1"} Jan 30 10:00:01 crc kubenswrapper[4758]: I0130 10:00:01.590811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerStarted","Data":"819a63cf33834f09384ea6d3cc1d1229b348ccabb1c99d2a3f50d83a15015a2a"} Jan 30 10:00:01 crc kubenswrapper[4758]: I0130 10:00:01.618537 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" 
podStartSLOduration=1.6185187189999999 podStartE2EDuration="1.618518719s" podCreationTimestamp="2026-01-30 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:00:01.610372122 +0000 UTC m=+5406.582683663" watchObservedRunningTime="2026-01-30 10:00:01.618518719 +0000 UTC m=+5406.590830260" Jan 30 10:00:02 crc kubenswrapper[4758]: I0130 10:00:02.599338 4758 generic.go:334] "Generic (PLEG): container finished" podID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerID="da2f39fd9f27fcc1481cbfd2a939f7a131858726c2f9923b0278f5da04987da1" exitCode=0 Jan 30 10:00:02 crc kubenswrapper[4758]: I0130 10:00:02.599377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerDied","Data":"da2f39fd9f27fcc1481cbfd2a939f7a131858726c2f9923b0278f5da04987da1"} Jan 30 10:00:03 crc kubenswrapper[4758]: I0130 10:00:03.961974 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.047160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"b6e6a092-7ac9-466c-9d70-f324f2908447\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.047506 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"b6e6a092-7ac9-466c-9d70-f324f2908447\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.047704 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"b6e6a092-7ac9-466c-9d70-f324f2908447\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.048492 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume" (OuterVolumeSpecName: "config-volume") pod "b6e6a092-7ac9-466c-9d70-f324f2908447" (UID: "b6e6a092-7ac9-466c-9d70-f324f2908447"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.056227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b6e6a092-7ac9-466c-9d70-f324f2908447" (UID: "b6e6a092-7ac9-466c-9d70-f324f2908447"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.065815 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x" (OuterVolumeSpecName: "kube-api-access-gkb5x") pod "b6e6a092-7ac9-466c-9d70-f324f2908447" (UID: "b6e6a092-7ac9-466c-9d70-f324f2908447"). InnerVolumeSpecName "kube-api-access-gkb5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.149273 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.149302 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.149313 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.616367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerDied","Data":"819a63cf33834f09384ea6d3cc1d1229b348ccabb1c99d2a3f50d83a15015a2a"} Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.616408 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="819a63cf33834f09384ea6d3cc1d1229b348ccabb1c99d2a3f50d83a15015a2a" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.616421 4758 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:05 crc kubenswrapper[4758]: I0130 10:00:05.034200 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 10:00:05 crc kubenswrapper[4758]: I0130 10:00:05.042709 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 10:00:05 crc kubenswrapper[4758]: I0130 10:00:05.781383 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" path="/var/lib/kubelet/pods/31c0d7e9-81d5-4f36-bc9e-56e22d853f85/volumes" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.410902 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/util/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.668905 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/util/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.681309 4758 scope.go:117] "RemoveContainer" containerID="32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.708235 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/pull/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.732688 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/pull/0.log" Jan 30 10:00:11 crc 
kubenswrapper[4758]: I0130 10:00:11.934063 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/pull/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.959133 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/extract/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.008606 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/util/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.279559 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f9bbdc844-6cgsq_62b3bb0d-894a-4cb1-b644-d42f3cba98d7/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.302688 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-566c8844c5-6nj4p_1c4d1258-0416-49d0-a3a5-6ece70dc0c46/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.434987 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-cp9km_dc189df6-25bc-4d6e-aa30-05ce0db12721/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.570267 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784f59d4f4-sw42x_fe68673c-8979-46ee-a4aa-f95bcd7b4e8a/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.673283 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54985f5875-jct4c_9579fc9d-6eae-4249-ac43-35144ed58bed/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.779429 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-vjdn9_b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.145801 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-5k6df_ef31968c-db2e-4083-a08f-19a8daf0ac2d/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.151595 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-4jmpb_5c2a7d2b-62a1-468b-a3b3-fe77698a41a2/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.348055 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c9d56f9bd-h89d9_b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.399463 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-74954f9f78-kxmjn_ac7c91ce-d4d9-4754-9828-a43140218228/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.876207 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-c5d9k_0565da5c-02e0-409f-b801-e06c3e79ef47/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.990431 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6cfc4f6754-7s6n2_b6d614d3-1ced-4b27-bd91-8edd410e5fc5/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.206268 4758 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-67f5956bc9-hp2mv_0b006f91-5b27-4342-935b-c7a7f174c03b/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.251704 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-kdnzv_a104527b-98dc-4120-91b5-6e7e9466b9a3/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.416400 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv_8cb0c6cc-e254-4dae-b433-397504fba6dc/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.600529 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-744b85dfd5-tlqzj_10f9b3c9-c691-403e-801f-420bc2701a95/operator/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.880311 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zh8ld_43724cfc-11f7-4ded-9561-4bde1020015f/registry-server/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.209948 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-qlc8l_03b807f9-cd53-4189-b7df-09c5ea5fdf53/manager/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.419072 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-vzcpf_178c1ff9-1a2a-4c4a-8258-89c267a5d0aa/manager/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.610787 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gt4n2_fbb26261-18aa-4ba0-940e-788200175600/operator/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.897006 4758 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59cb5bfcb7-xtndr_0c09af13-f67b-4306-9039-f02d5f9e2f53/manager/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.922207 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d4f9d9c9b-t7dht_e4787454-8070-449e-a7d0-2ff179eaaff3/manager/0.log" Jan 30 10:00:16 crc kubenswrapper[4758]: I0130 10:00:16.225960 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pmvv4_fe039ec9-aaec-4e17-8eac-c7719245ba4d/manager/0.log" Jan 30 10:00:16 crc kubenswrapper[4758]: I0130 10:00:16.239307 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cd99594-xhszj_b271df00-e9f2-4c58-94e7-22ea4b7d7eaf/manager/0.log" Jan 30 10:00:16 crc kubenswrapper[4758]: I0130 10:00:16.437079 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5bf648c946-ngzmc_73790ffa-61b1-489c-94c9-3934af94185f/manager/0.log" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.387834 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.388403 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.388447 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.389160 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.389222 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8" gracePeriod=600 Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.787832 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8" exitCode=0 Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.787928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8"} Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.788107 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"} Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.788126 4758 scope.go:117] "RemoveContainer" 
containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 10:00:35 crc kubenswrapper[4758]: I0130 10:00:35.920223 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hd9qf_3934d9aa-0054-4ed3-a2e8-a57dc60dad77/control-plane-machine-set-operator/0.log" Jan 30 10:00:36 crc kubenswrapper[4758]: I0130 10:00:36.114881 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q592c_8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972/machine-api-operator/0.log" Jan 30 10:00:36 crc kubenswrapper[4758]: I0130 10:00:36.124482 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q592c_8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972/kube-rbac-proxy/0.log" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.926187 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:41 crc kubenswrapper[4758]: E0130 10:00:41.927275 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerName="collect-profiles" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.927292 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerName="collect-profiles" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.927541 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerName="collect-profiles" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.929282 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.950940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.984331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.984384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.984542 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086308 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086336 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.087123 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.110707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.249948 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.755194 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.927470 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.929298 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.944279 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.967639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a"} Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.967697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"fc255179fddadcb8aafebdb63864e466d1c15cf8f2e2eebf0300197b548eb2a8"} Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.008005 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.008078 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.008282 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.109761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.109836 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.110053 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.110918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.111202 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.137951 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.251716 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.580322 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:00:43 crc kubenswrapper[4758]: W0130 10:00:43.585155 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74ec10d5_be61_406f_949a_38c5457d385a.slice/crio-9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e WatchSource:0}: Error finding container 9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e: Status 404 returned error can't find the container with id 9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.976773 4758 generic.go:334] "Generic (PLEG): container finished" podID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" exitCode=0 Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.976839 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a"} Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.980168 4758 generic.go:334] "Generic (PLEG): container finished" podID="74ec10d5-be61-406f-949a-38c5457d385a" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" exitCode=0 Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.980202 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b"} Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.980226 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerStarted","Data":"9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e"} Jan 30 10:00:44 crc kubenswrapper[4758]: I0130 10:00:44.989346 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerStarted","Data":"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363"} Jan 30 10:00:44 crc kubenswrapper[4758]: I0130 10:00:44.990956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a"} Jan 30 10:00:46 crc kubenswrapper[4758]: I0130 10:00:46.001100 4758 generic.go:334] "Generic (PLEG): container finished" podID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" exitCode=0 Jan 30 10:00:46 crc kubenswrapper[4758]: I0130 10:00:46.001189 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a"} Jan 30 10:00:47 crc kubenswrapper[4758]: I0130 10:00:47.011847 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3"} Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.036372 4758 generic.go:334] "Generic (PLEG): container finished" podID="74ec10d5-be61-406f-949a-38c5457d385a" 
containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" exitCode=0 Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.036439 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363"} Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.070071 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xszq" podStartSLOduration=6.387796912 podStartE2EDuration="9.070024241s" podCreationTimestamp="2026-01-30 10:00:41 +0000 UTC" firstStartedPulling="2026-01-30 10:00:43.979084861 +0000 UTC m=+5448.951396412" lastFinishedPulling="2026-01-30 10:00:46.66131219 +0000 UTC m=+5451.633623741" observedRunningTime="2026-01-30 10:00:47.038797443 +0000 UTC m=+5452.011109004" watchObservedRunningTime="2026-01-30 10:00:50.070024241 +0000 UTC m=+5455.042335792" Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.927873 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gzhzw_3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb/cert-manager-controller/0.log" Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.047811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerStarted","Data":"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2"} Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.095614 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jptgp" podStartSLOduration=2.333008473 podStartE2EDuration="9.095590979s" podCreationTimestamp="2026-01-30 10:00:42 +0000 UTC" firstStartedPulling="2026-01-30 10:00:43.981867478 +0000 UTC m=+5448.954179029" 
lastFinishedPulling="2026-01-30 10:00:50.744449984 +0000 UTC m=+5455.716761535" observedRunningTime="2026-01-30 10:00:51.075383794 +0000 UTC m=+5456.047695405" watchObservedRunningTime="2026-01-30 10:00:51.095590979 +0000 UTC m=+5456.067902540" Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.157939 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-fpg8k_738fc587-0a87-41b5-b2b0-690fa92d754e/cert-manager-cainjector/0.log" Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.344396 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xrm7r_b55a68e5-f198-4525-9d07-2acdcb906d36/cert-manager-webhook/0.log" Jan 30 10:00:52 crc kubenswrapper[4758]: I0130 10:00:52.251486 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:52 crc kubenswrapper[4758]: I0130 10:00:52.251754 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:52 crc kubenswrapper[4758]: I0130 10:00:52.307008 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.109633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.253002 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.253053 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.720655 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:54 crc kubenswrapper[4758]: I0130 10:00:54.299535 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:00:54 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:00:54 crc kubenswrapper[4758]: > Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.080897 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xszq" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" containerID="cri-o://ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" gracePeriod=2 Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.564510 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.659741 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"e37ad05e-bde5-49eb-81da-eb24781ce575\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.659892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"e37ad05e-bde5-49eb-81da-eb24781ce575\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.660024 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qkwm\" (UniqueName: 
\"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"e37ad05e-bde5-49eb-81da-eb24781ce575\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.660430 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities" (OuterVolumeSpecName: "utilities") pod "e37ad05e-bde5-49eb-81da-eb24781ce575" (UID: "e37ad05e-bde5-49eb-81da-eb24781ce575"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.660701 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.672314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm" (OuterVolumeSpecName: "kube-api-access-5qkwm") pod "e37ad05e-bde5-49eb-81da-eb24781ce575" (UID: "e37ad05e-bde5-49eb-81da-eb24781ce575"). InnerVolumeSpecName "kube-api-access-5qkwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.687527 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e37ad05e-bde5-49eb-81da-eb24781ce575" (UID: "e37ad05e-bde5-49eb-81da-eb24781ce575"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.764198 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.764250 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089680 4758 generic.go:334] "Generic (PLEG): container finished" podID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" exitCode=0 Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3"} Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089755 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089770 4758 scope.go:117] "RemoveContainer" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"fc255179fddadcb8aafebdb63864e466d1c15cf8f2e2eebf0300197b548eb2a8"} Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.114540 4758 scope.go:117] "RemoveContainer" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.120288 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.134784 4758 scope.go:117] "RemoveContainer" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.147866 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.177338 4758 scope.go:117] "RemoveContainer" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" Jan 30 10:00:56 crc kubenswrapper[4758]: E0130 10:00:56.177806 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3\": container with ID starting with ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3 not found: ID does not exist" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.177914 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3"} err="failed to get container status \"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3\": rpc error: code = NotFound desc = could not find container \"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3\": container with ID starting with ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3 not found: ID does not exist" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.177988 4758 scope.go:117] "RemoveContainer" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" Jan 30 10:00:56 crc kubenswrapper[4758]: E0130 10:00:56.178477 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a\": container with ID starting with a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a not found: ID does not exist" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.178587 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a"} err="failed to get container status \"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a\": rpc error: code = NotFound desc = could not find container \"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a\": container with ID starting with a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a not found: ID does not exist" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.178660 4758 scope.go:117] "RemoveContainer" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" Jan 30 10:00:56 crc kubenswrapper[4758]: E0130 
10:00:56.178964 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a\": container with ID starting with 074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a not found: ID does not exist" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.178987 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a"} err="failed to get container status \"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a\": rpc error: code = NotFound desc = could not find container \"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a\": container with ID starting with 074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a not found: ID does not exist" Jan 30 10:00:57 crc kubenswrapper[4758]: I0130 10:00:57.781542 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" path="/var/lib/kubelet/pods/e37ad05e-bde5-49eb-81da-eb24781ce575/volumes" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.154469 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496121-22z2n"] Jan 30 10:01:00 crc kubenswrapper[4758]: E0130 10:01:00.155192 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-content" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155210 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-content" Jan 30 10:01:00 crc kubenswrapper[4758]: E0130 10:01:00.155235 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" 
containerName="registry-server" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155244 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" Jan 30 10:01:00 crc kubenswrapper[4758]: E0130 10:01:00.155279 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-utilities" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155287 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-utilities" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155480 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.156177 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.166637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496121-22z2n"] Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " 
pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341206 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.443404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.443842 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.443993 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " 
pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.444173 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.451282 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.453424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.453764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.464323 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc 
kubenswrapper[4758]: I0130 10:01:00.473941 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.949671 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496121-22z2n"] Jan 30 10:01:01 crc kubenswrapper[4758]: I0130 10:01:01.133068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerStarted","Data":"131f208ad9c0f5ca08e81a316be162ec993578076d588ef58a15412be61bdbbc"} Jan 30 10:01:02 crc kubenswrapper[4758]: I0130 10:01:02.144273 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerStarted","Data":"8f7ba1e3736ab74314c8e0b4485ee4087f0f9c9c1db720ef32172059aed12689"} Jan 30 10:01:02 crc kubenswrapper[4758]: I0130 10:01:02.160961 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496121-22z2n" podStartSLOduration=2.160945838 podStartE2EDuration="2.160945838s" podCreationTimestamp="2026-01-30 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:01:02.157357785 +0000 UTC m=+5467.129669336" watchObservedRunningTime="2026-01-30 10:01:02.160945838 +0000 UTC m=+5467.133257389" Jan 30 10:01:04 crc kubenswrapper[4758]: I0130 10:01:04.298284 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:01:04 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:01:04 crc kubenswrapper[4758]: > Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 
10:01:05.185700 4758 generic.go:334] "Generic (PLEG): container finished" podID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerID="8f7ba1e3736ab74314c8e0b4485ee4087f0f9c9c1db720ef32172059aed12689" exitCode=0 Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.185793 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerDied","Data":"8f7ba1e3736ab74314c8e0b4485ee4087f0f9c9c1db720ef32172059aed12689"} Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.559162 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5w4jj_edf1a46e-1ddb-45c6-b545-911d0f651ee9/nmstate-console-plugin/0.log" Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.811984 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jjswr_619a7108-d329-4b73-84eb-4258a2bfe118/nmstate-handler/0.log" Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.814869 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9d5tr_767b8ab5-0385-4ee5-a65c-20d58550812e/kube-rbac-proxy/0.log" Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.886337 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9d5tr_767b8ab5-0385-4ee5-a65c-20d58550812e/nmstate-metrics/0.log" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.112515 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-r9vhp_1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1/nmstate-operator/0.log" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.159116 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tp5p2_b3b445ff-f326-4f7a-9f01-557ea0ac488e/nmstate-webhook/0.log" Jan 30 10:01:06 crc kubenswrapper[4758]: 
I0130 10:01:06.598365 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.767840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.767915 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.768004 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.768109 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.780278 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.780545 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc" (OuterVolumeSpecName: "kube-api-access-8q5rc") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "kube-api-access-8q5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.831610 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data" (OuterVolumeSpecName: "config-data") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.840186 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870544 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870578 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870588 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870597 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:07 crc kubenswrapper[4758]: I0130 10:01:07.203959 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerDied","Data":"131f208ad9c0f5ca08e81a316be162ec993578076d588ef58a15412be61bdbbc"} Jan 30 10:01:07 crc kubenswrapper[4758]: I0130 10:01:07.204319 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="131f208ad9c0f5ca08e81a316be162ec993578076d588ef58a15412be61bdbbc" Jan 30 10:01:07 crc kubenswrapper[4758]: I0130 10:01:07.203998 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:14 crc kubenswrapper[4758]: I0130 10:01:14.300488 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:01:14 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:01:14 crc kubenswrapper[4758]: > Jan 30 10:01:24 crc kubenswrapper[4758]: I0130 10:01:24.296398 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:01:24 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:01:24 crc kubenswrapper[4758]: > Jan 30 10:01:33 crc kubenswrapper[4758]: I0130 10:01:33.296390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:33 crc kubenswrapper[4758]: I0130 10:01:33.349477 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:33 crc kubenswrapper[4758]: I0130 10:01:33.529579 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.063513 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-28gk8_30993dee-7712-48e7-a156-86293a84ea40/kube-rbac-proxy/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.094755 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-28gk8_30993dee-7712-48e7-a156-86293a84ea40/controller/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.309162 4758 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.514542 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" containerID="cri-o://f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" gracePeriod=2 Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.546024 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.548355 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.600387 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.641463 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.825790 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.887135 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.913925 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:34 crc 
kubenswrapper[4758]: I0130 10:01:34.959977 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.112117 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.185222 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.212181 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.240614 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290082 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"74ec10d5-be61-406f-949a-38c5457d385a\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290365 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"74ec10d5-be61-406f-949a-38c5457d385a\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290406 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod 
\"74ec10d5-be61-406f-949a-38c5457d385a\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290750 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities" (OuterVolumeSpecName: "utilities") pod "74ec10d5-be61-406f-949a-38c5457d385a" (UID: "74ec10d5-be61-406f-949a-38c5457d385a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.304329 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk" (OuterVolumeSpecName: "kube-api-access-4wphk") pod "74ec10d5-be61-406f-949a-38c5457d385a" (UID: "74ec10d5-be61-406f-949a-38c5457d385a"). InnerVolumeSpecName "kube-api-access-4wphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.309995 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/controller/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.392344 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.393118 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.398895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"74ec10d5-be61-406f-949a-38c5457d385a" (UID: "74ec10d5-be61-406f-949a-38c5457d385a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.457414 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/frr-metrics/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.494424 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528194 4758 generic.go:334] "Generic (PLEG): container finished" podID="74ec10d5-be61-406f-949a-38c5457d385a" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" exitCode=0 Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528297 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2"} Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e"} Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528429 4758 scope.go:117] "RemoveContainer" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528679 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.560125 4758 scope.go:117] "RemoveContainer" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.582093 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/kube-rbac-proxy/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.615666 4758 scope.go:117] "RemoveContainer" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.627134 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.657083 4758 scope.go:117] "RemoveContainer" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" Jan 30 10:01:35 crc kubenswrapper[4758]: E0130 10:01:35.659944 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2\": container with ID starting with f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2 not found: ID does not exist" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.659990 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2"} err="failed to get container status \"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2\": rpc error: code = NotFound desc = could not find container \"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2\": container with ID starting with 
f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2 not found: ID does not exist" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.660065 4758 scope.go:117] "RemoveContainer" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" Jan 30 10:01:35 crc kubenswrapper[4758]: E0130 10:01:35.664159 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363\": container with ID starting with 39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363 not found: ID does not exist" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.664193 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363"} err="failed to get container status \"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363\": rpc error: code = NotFound desc = could not find container \"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363\": container with ID starting with 39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363 not found: ID does not exist" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.664216 4758 scope.go:117] "RemoveContainer" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" Jan 30 10:01:35 crc kubenswrapper[4758]: E0130 10:01:35.668240 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b\": container with ID starting with e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b not found: ID does not exist" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" Jan 30 10:01:35 crc 
kubenswrapper[4758]: I0130 10:01:35.668287 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b"} err="failed to get container status \"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b\": rpc error: code = NotFound desc = could not find container \"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b\": container with ID starting with e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b not found: ID does not exist" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.673366 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/kube-rbac-proxy-frr/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.673670 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.780244 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/reloader/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.782303 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ec10d5-be61-406f-949a-38c5457d385a" path="/var/lib/kubelet/pods/74ec10d5-be61-406f-949a-38c5457d385a/volumes" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.920698 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kjhjn_0393e366-eeba-40c9-8020-9b16d0092dfd/frr-k8s-webhook-server/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.257455 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75c8688755-rr2t4_29744c9b-d424-4ac9-b224-fe0956166373/manager/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.323442 4758 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7ffc5c558b-h88wr_79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5/webhook-server/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.585735 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67xgc_10c38902-7117-4dc3-ad90-eb26dd9656de/kube-rbac-proxy/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.986438 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/frr/0.log" Jan 30 10:01:37 crc kubenswrapper[4758]: I0130 10:01:37.092557 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67xgc_10c38902-7117-4dc3-ad90-eb26dd9656de/speaker/0.log" Jan 30 10:01:51 crc kubenswrapper[4758]: I0130 10:01:51.800427 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/util/0.log" Jan 30 10:01:51 crc kubenswrapper[4758]: I0130 10:01:51.998201 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.038084 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.058835 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.233985 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.272198 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.347474 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/extract/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.459692 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.697810 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.736987 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.758016 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.904737 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/util/0.log" Jan 30 
10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.907679 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.945251 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/extract/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.088028 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-utilities/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.275298 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-utilities/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.341781 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-content/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.346341 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-content/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.507844 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-content/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.515254 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-utilities/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.817170 
4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-utilities/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.155820 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-content/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.162695 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/registry-server/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.181573 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-utilities/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.192919 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-content/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.344993 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-utilities/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.483721 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-content/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.642341 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lsfmf_f34a2860-1860-4032-8f5d-9278338c1b19/marketplace-operator/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.932984 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-utilities/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.034658 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/registry-server/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.043638 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-content/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.043801 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-utilities/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.189281 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-content/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.442492 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-utilities/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.448863 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-content/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.667859 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/registry-server/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.777303 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-utilities/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.006508 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-utilities/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.011580 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-content/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.031417 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-content/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.264902 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-utilities/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.306896 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-content/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.960403 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/registry-server/0.log" Jan 30 10:02:22 crc kubenswrapper[4758]: I0130 10:02:22.387702 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:02:22 crc kubenswrapper[4758]: I0130 10:02:22.388245 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:02:52 crc kubenswrapper[4758]: I0130 10:02:52.387197 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:02:52 crc kubenswrapper[4758]: I0130 10:02:52.387873 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.387702 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.389500 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.389573 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 10:03:22 crc 
kubenswrapper[4758]: I0130 10:03:22.390309 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.390366 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" gracePeriod=600 Jan 30 10:03:22 crc kubenswrapper[4758]: E0130 10:03:22.521260 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.450854 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" exitCode=0 Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.450908 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"} Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.451238 4758 scope.go:117] "RemoveContainer" 
containerID="5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8" Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.451825 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:03:23 crc kubenswrapper[4758]: E0130 10:03:23.452106 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:35 crc kubenswrapper[4758]: I0130 10:03:35.774757 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:03:35 crc kubenswrapper[4758]: E0130 10:03:35.775510 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.826210 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827104 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerName="keystone-cron" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827118 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerName="keystone-cron" Jan 30 
10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827141 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827149 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827162 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-content" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827169 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-content" Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827192 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-utilities" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827199 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-utilities" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827408 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827435 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerName="keystone-cron" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.828734 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.836031 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.889452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.889522 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.889545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.991555 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992009 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992076 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992328 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992444 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:42 crc kubenswrapper[4758]: I0130 10:03:42.015238 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:42 crc kubenswrapper[4758]: I0130 10:03:42.147356 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:42 crc kubenswrapper[4758]: W0130 10:03:42.831685 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d WatchSource:0}: Error finding container b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d: Status 404 returned error can't find the container with id b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d Jan 30 10:03:42 crc kubenswrapper[4758]: I0130 10:03:42.869206 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.638189 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" exitCode=0 Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.638361 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859"} Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.638486 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerStarted","Data":"b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d"} Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.641540 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:03:44 crc kubenswrapper[4758]: I0130 10:03:44.647650 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerStarted","Data":"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21"} Jan 30 10:03:45 crc kubenswrapper[4758]: I0130 10:03:45.657865 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" exitCode=0 Jan 30 10:03:45 crc kubenswrapper[4758]: I0130 10:03:45.658098 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21"} Jan 30 10:03:46 crc kubenswrapper[4758]: I0130 10:03:46.669211 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerStarted","Data":"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112"} Jan 30 10:03:46 crc kubenswrapper[4758]: I0130 10:03:46.694535 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vgrtm" podStartSLOduration=3.193368818 podStartE2EDuration="5.694517899s" podCreationTimestamp="2026-01-30 10:03:41 +0000 UTC" firstStartedPulling="2026-01-30 10:03:43.641332358 +0000 UTC m=+5628.613643909" lastFinishedPulling="2026-01-30 10:03:46.142481439 +0000 UTC m=+5631.114792990" observedRunningTime="2026-01-30 10:03:46.689007996 +0000 UTC m=+5631.661319557" watchObservedRunningTime="2026-01-30 10:03:46.694517899 +0000 UTC m=+5631.666829450" Jan 30 10:03:50 crc kubenswrapper[4758]: I0130 10:03:50.768378 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:03:50 crc kubenswrapper[4758]: E0130 10:03:50.768875 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.149335 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.149665 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.193841 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.774320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.817347 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:54 crc kubenswrapper[4758]: I0130 10:03:54.750053 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vgrtm" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server" containerID="cri-o://a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" gracePeriod=2 Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.204859 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.260412 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"7ff04746-69db-463d-8b3e-23b69fa6a995\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.260765 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"7ff04746-69db-463d-8b3e-23b69fa6a995\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.261264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities" (OuterVolumeSpecName: "utilities") pod "7ff04746-69db-463d-8b3e-23b69fa6a995" (UID: "7ff04746-69db-463d-8b3e-23b69fa6a995"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.282118 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"7ff04746-69db-463d-8b3e-23b69fa6a995\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.288382 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.306314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks" (OuterVolumeSpecName: "kube-api-access-t5dks") pod "7ff04746-69db-463d-8b3e-23b69fa6a995" (UID: "7ff04746-69db-463d-8b3e-23b69fa6a995"). InnerVolumeSpecName "kube-api-access-t5dks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.390382 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") on node \"crc\" DevicePath \"\"" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.435988 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ff04746-69db-463d-8b3e-23b69fa6a995" (UID: "7ff04746-69db-463d-8b3e-23b69fa6a995"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.492400 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.760938 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" exitCode=0 Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.760989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112"} Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.761013 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d"} Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.761030 4758 scope.go:117] "RemoveContainer" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.761171 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.796322 4758 scope.go:117] "RemoveContainer" containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.851199 4758 scope.go:117] "RemoveContainer" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.908388 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.921338 4758 scope.go:117] "RemoveContainer" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" Jan 30 10:03:55 crc kubenswrapper[4758]: E0130 10:03:55.923552 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112\": container with ID starting with a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112 not found: ID does not exist" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.923603 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112"} err="failed to get container status \"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112\": rpc error: code = NotFound desc = could not find container \"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112\": container with ID starting with a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112 not found: ID does not exist" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.923649 4758 scope.go:117] "RemoveContainer" 
containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" Jan 30 10:03:55 crc kubenswrapper[4758]: E0130 10:03:55.924835 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21\": container with ID starting with 8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21 not found: ID does not exist" containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.924864 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21"} err="failed to get container status \"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21\": rpc error: code = NotFound desc = could not find container \"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21\": container with ID starting with 8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21 not found: ID does not exist" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.924882 4758 scope.go:117] "RemoveContainer" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" Jan 30 10:03:55 crc kubenswrapper[4758]: E0130 10:03:55.925467 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859\": container with ID starting with fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859 not found: ID does not exist" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.925489 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859"} err="failed to get container status \"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859\": rpc error: code = NotFound desc = could not find container \"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859\": container with ID starting with fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859 not found: ID does not exist" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.926817 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:57 crc kubenswrapper[4758]: I0130 10:03:57.781793 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" path="/var/lib/kubelet/pods/7ff04746-69db-463d-8b3e-23b69fa6a995/volumes" Jan 30 10:04:01 crc kubenswrapper[4758]: E0130 10:04:01.365120 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:01 crc kubenswrapper[4758]: I0130 10:04:01.769605 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:01 crc kubenswrapper[4758]: E0130 10:04:01.769930 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:11 crc kubenswrapper[4758]: E0130 10:04:11.613334 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:11 crc kubenswrapper[4758]: I0130 10:04:11.838131 4758 scope.go:117] "RemoveContainer" containerID="5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d" Jan 30 10:04:15 crc kubenswrapper[4758]: I0130 10:04:15.774978 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:15 crc kubenswrapper[4758]: E0130 10:04:15.777369 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:21 crc kubenswrapper[4758]: E0130 10:04:21.863535 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:27 crc kubenswrapper[4758]: I0130 10:04:27.771546 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:27 crc kubenswrapper[4758]: E0130 10:04:27.772299 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:32 crc kubenswrapper[4758]: E0130 10:04:32.115748 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:40 crc kubenswrapper[4758]: I0130 10:04:40.769196 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:40 crc kubenswrapper[4758]: E0130 10:04:40.769958 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:42 crc kubenswrapper[4758]: E0130 10:04:42.371448 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:52 crc kubenswrapper[4758]: E0130 10:04:52.604510 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:52 crc kubenswrapper[4758]: I0130 10:04:52.769092 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:52 crc kubenswrapper[4758]: E0130 10:04:52.769351 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:05:03 crc kubenswrapper[4758]: I0130 10:05:03.768296 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:05:03 crc kubenswrapper[4758]: E0130 10:05:03.769014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:05:11 crc kubenswrapper[4758]: I0130 10:05:11.904136 4758 scope.go:117] "RemoveContainer" containerID="d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3" Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.482928 4758 generic.go:334] "Generic (PLEG): container finished" podID="bec5515f-517d-441a-8d27-381128c9cbe3" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee" exitCode=0 Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.483429 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerDied","Data":"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"} Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.484096 4758 scope.go:117] "RemoveContainer" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee" Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.888221 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpqrm_must-gather-qpckg_bec5515f-517d-441a-8d27-381128c9cbe3/gather/0.log" Jan 30 10:05:16 crc kubenswrapper[4758]: I0130 10:05:16.768779 4758 scope.go:117] "RemoveContainer" 
containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:05:16 crc kubenswrapper[4758]: E0130 10:05:16.769392 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:05:21 crc kubenswrapper[4758]: I0130 10:05:21.971123 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"] Jan 30 10:05:21 crc kubenswrapper[4758]: I0130 10:05:21.972006 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lpqrm/must-gather-qpckg" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy" containerID="cri-o://5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec" gracePeriod=2 Jan 30 10:05:21 crc kubenswrapper[4758]: I0130 10:05:21.989227 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"] Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.520111 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpqrm_must-gather-qpckg_bec5515f-517d-441a-8d27-381128c9cbe3/copy/0.log" Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.521109 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.578913 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpqrm_must-gather-qpckg_bec5515f-517d-441a-8d27-381128c9cbe3/copy/0.log" Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.579697 4758 generic.go:334] "Generic (PLEG): container finished" podID="bec5515f-517d-441a-8d27-381128c9cbe3" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec" exitCode=143 Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.579742 4758 scope.go:117] "RemoveContainer" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec" Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.579855 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg" Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.609575 4758 scope.go:117] "RemoveContainer" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee" Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.630092 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"bec5515f-517d-441a-8d27-381128c9cbe3\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.630183 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"bec5515f-517d-441a-8d27-381128c9cbe3\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.637481 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q" (OuterVolumeSpecName: "kube-api-access-b7d4q") pod "bec5515f-517d-441a-8d27-381128c9cbe3" (UID: "bec5515f-517d-441a-8d27-381128c9cbe3"). InnerVolumeSpecName "kube-api-access-b7d4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.680722 4758 scope.go:117] "RemoveContainer" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"
Jan 30 10:05:22 crc kubenswrapper[4758]: E0130 10:05:22.681219 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec\": container with ID starting with 5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec not found: ID does not exist" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.681248 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"} err="failed to get container status \"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec\": rpc error: code = NotFound desc = could not find container \"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec\": container with ID starting with 5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec not found: ID does not exist"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.681266 4758 scope.go:117] "RemoveContainer" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"
Jan 30 10:05:22 crc kubenswrapper[4758]: E0130 10:05:22.684081 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee\": container with ID starting with a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee not found: ID does not exist" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.684114 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"} err="failed to get container status \"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee\": rpc error: code = NotFound desc = could not find container \"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee\": container with ID starting with a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee not found: ID does not exist"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.732698 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") on node \"crc\" DevicePath \"\""
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.842358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bec5515f-517d-441a-8d27-381128c9cbe3" (UID: "bec5515f-517d-441a-8d27-381128c9cbe3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.938750 4758 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 30 10:05:23 crc kubenswrapper[4758]: I0130 10:05:23.785427 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" path="/var/lib/kubelet/pods/bec5515f-517d-441a-8d27-381128c9cbe3/volumes"
Jan 30 10:05:28 crc kubenswrapper[4758]: I0130 10:05:28.768605 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:28 crc kubenswrapper[4758]: E0130 10:05:28.769350 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:05:42 crc kubenswrapper[4758]: I0130 10:05:42.769525 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:42 crc kubenswrapper[4758]: E0130 10:05:42.770942 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:05:57 crc kubenswrapper[4758]: I0130 10:05:57.768830 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:57 crc kubenswrapper[4758]: E0130 10:05:57.769770 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:11 crc kubenswrapper[4758]: I0130 10:06:11.768950 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:11 crc kubenswrapper[4758]: E0130 10:06:11.769882 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:25 crc kubenswrapper[4758]: I0130 10:06:25.774358 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:25 crc kubenswrapper[4758]: E0130 10:06:25.775199 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:36 crc kubenswrapper[4758]: I0130 10:06:36.769063 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:36 crc kubenswrapper[4758]: E0130 10:06:36.770848 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.867605 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.868993 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869010 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869051 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869060 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869075 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-utilities"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869085 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-utilities"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869096 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="gather"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869104 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="gather"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869127 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-content"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869134 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-content"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.896018 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="gather"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.896131 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.896197 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.898553 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.905790 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.952225 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.952275 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.952307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.054620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.054682 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.054722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.055321 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.055448 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.078773 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.226844 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.587517 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:06:49 crc kubenswrapper[4758]: I0130 10:06:49.423662 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3" exitCode=0
Jan 30 10:06:49 crc kubenswrapper[4758]: I0130 10:06:49.423876 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"}
Jan 30 10:06:49 crc kubenswrapper[4758]: I0130 10:06:49.423913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerStarted","Data":"4d5d1d421b45d8382306b2113710c6bccea1f4dd16f411ccd63e1d217fcd85c4"}
Jan 30 10:06:50 crc kubenswrapper[4758]: I0130 10:06:50.768129 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:50 crc kubenswrapper[4758]: E0130 10:06:50.768364 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:51 crc kubenswrapper[4758]: I0130 10:06:51.443595 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerStarted","Data":"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"}
Jan 30 10:06:53 crc kubenswrapper[4758]: I0130 10:06:53.463847 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820" exitCode=0
Jan 30 10:06:53 crc kubenswrapper[4758]: I0130 10:06:53.463915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"}
Jan 30 10:06:54 crc kubenswrapper[4758]: I0130 10:06:54.475961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerStarted","Data":"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"}
Jan 30 10:06:54 crc kubenswrapper[4758]: I0130 10:06:54.502741 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljdcd" podStartSLOduration=3.053154988 podStartE2EDuration="7.502724725s" podCreationTimestamp="2026-01-30 10:06:47 +0000 UTC" firstStartedPulling="2026-01-30 10:06:49.425628595 +0000 UTC m=+5814.397940146" lastFinishedPulling="2026-01-30 10:06:53.875198332 +0000 UTC m=+5818.847509883" observedRunningTime="2026-01-30 10:06:54.497136449 +0000 UTC m=+5819.469448000" watchObservedRunningTime="2026-01-30 10:06:54.502724725 +0000 UTC m=+5819.475036276"
Jan 30 10:06:58 crc kubenswrapper[4758]: I0130 10:06:58.227073 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:58 crc kubenswrapper[4758]: I0130 10:06:58.227675 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:58 crc kubenswrapper[4758]: I0130 10:06:58.280914 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:07:04 crc kubenswrapper[4758]: I0130 10:07:04.768994 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:07:04 crc kubenswrapper[4758]: E0130 10:07:04.769560 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:07:08 crc kubenswrapper[4758]: I0130 10:07:08.289836 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:07:08 crc kubenswrapper[4758]: I0130 10:07:08.344305 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:07:08 crc kubenswrapper[4758]: I0130 10:07:08.612991 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljdcd" podUID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerName="registry-server" containerID="cri-o://9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4" gracePeriod=2
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.101744 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.254163 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") "
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.254628 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") "
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.254737 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") "
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.255813 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities" (OuterVolumeSpecName: "utilities") pod "b5682a35-e6fb-4ecf-883c-796b4d1988c0" (UID: "b5682a35-e6fb-4ecf-883c-796b4d1988c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.261402 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m" (OuterVolumeSpecName: "kube-api-access-pqs8m") pod "b5682a35-e6fb-4ecf-883c-796b4d1988c0" (UID: "b5682a35-e6fb-4ecf-883c-796b4d1988c0"). InnerVolumeSpecName "kube-api-access-pqs8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.319230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5682a35-e6fb-4ecf-883c-796b4d1988c0" (UID: "b5682a35-e6fb-4ecf-883c-796b4d1988c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.357133 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.357166 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") on node \"crc\" DevicePath \"\""
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.357177 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623556 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4" exitCode=0
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"}
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623632 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"4d5d1d421b45d8382306b2113710c6bccea1f4dd16f411ccd63e1d217fcd85c4"}
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623643 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623651 4758 scope.go:117] "RemoveContainer" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.642669 4758 scope.go:117] "RemoveContainer" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.663029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.673415 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.678769 4758 scope.go:117] "RemoveContainer" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.726238 4758 scope.go:117] "RemoveContainer" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"
Jan 30 10:07:09 crc kubenswrapper[4758]: E0130 10:07:09.726758 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4\": container with ID starting with 9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4 not found: ID does not exist" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.726807 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"} err="failed to get container status \"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4\": rpc error: code = NotFound desc = could not find container \"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4\": container with ID starting with 9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4 not found: ID does not exist"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.726833 4758 scope.go:117] "RemoveContainer" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"
Jan 30 10:07:09 crc kubenswrapper[4758]: E0130 10:07:09.727176 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820\": container with ID starting with 22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820 not found: ID does not exist" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.727304 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"} err="failed to get container status \"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820\": rpc error: code = NotFound desc = could not find container \"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820\": container with ID starting with 22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820 not found: ID does not exist"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.727421 4758 scope.go:117] "RemoveContainer" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"
Jan 30 10:07:09 crc kubenswrapper[4758]: E0130 10:07:09.727772 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3\": container with ID starting with b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3 not found: ID does not exist" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.728049 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"} err="failed to get container status \"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3\": rpc error: code = NotFound desc = could not find container \"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3\": container with ID starting with b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3 not found: ID does not exist"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.777611 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" path="/var/lib/kubelet/pods/b5682a35-e6fb-4ecf-883c-796b4d1988c0/volumes"
Jan 30 10:07:18 crc kubenswrapper[4758]: I0130 10:07:18.768562 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:07:18 crc kubenswrapper[4758]: E0130 10:07:18.769686 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:07:31 crc kubenswrapper[4758]: I0130 10:07:31.769066 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:07:31 crc kubenswrapper[4758]: E0130 10:07:31.769965 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:07:44 crc kubenswrapper[4758]: I0130 10:07:44.768812 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:07:44 crc kubenswrapper[4758]: E0130 10:07:44.770712 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:07:59 crc kubenswrapper[4758]: I0130 10:07:59.768437 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:07:59 crc kubenswrapper[4758]: E0130 10:07:59.769284 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:08:14 crc kubenswrapper[4758]: I0130 10:08:14.768463 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:08:14 crc kubenswrapper[4758]: E0130 10:08:14.769411 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:08:26 crc kubenswrapper[4758]: I0130 10:08:26.769320 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:08:27 crc kubenswrapper[4758]: I0130 10:08:27.266259 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"e6183904969d375ed1d6878bb833f4097e19af4109cfd0f6282a50a22b32e986"}